Abstract:
We have carried out high-contrast imaging of 70 young, nearby B and A stars to search for brown dwarf and planetary companions as part of the Gemini NICI Planet-Finding Campaign. Our survey represents the largest, deepest survey for planets around high-mass stars (≈1.5–2.5 $M_\odot$) conducted to date and includes the planet hosts β Pic and Fomalhaut. We obtained follow-up astrometry of all candidate companions within 400 AU projected separation for stars in uncrowded fields and identified new low-mass companions to HD 1160 and HIP 79797. We have found that the previously known young brown dwarf companion to HIP 79797 is itself a tight (3 AU) binary, composed of brown dwarfs with masses $58^{+21}_{-20}$ $M_{\rm Jup}$ and $55^{+20}_{-19}$ $M_{\rm Jup}$, making this system one of the rare substellar binaries in orbit around a star. Considering the contrast limits of our NICI data and the fact that we did not detect any planets, we use high-fidelity Monte Carlo simulations to show that fewer than 20% of 2 $M_\odot$ stars can have giant planets more massive than 4 $M_{\rm Jup}$ between 59 and 460 AU at 95% confidence, and fewer than 10% of these stars can have a planet more massive than 10 $M_{\rm Jup}$ between 38 and 650 AU. Overall, we find that large-separation giant planets are not common around B and A stars: fewer than 10% of B and A stars can have an analog to the HR 8799 b (7 $M_{\rm Jup}$, 68 AU) planet at 95% confidence. We also describe a new Bayesian technique for determining the ages of field B and A stars from photometry and theoretical isochrones. Our method produces more plausible ages for high-mass stars than previous age-dating techniques, which tend to underestimate stellar ages and their uncertainties.
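As a minimal, hedged sketch of the statistical step behind such upper limits (not the campaign's actual pipeline), the following assumes hypothetical per-star detection probabilities from completeness simulations; with zero detections, the 95% upper limit on the occurrence rate f is the smallest f for which the probability of seeing no planets falls below 5%.

```python
import numpy as np

# Hypothetical per-star detection probabilities for a given planet mass and
# separation range, as would come from Monte Carlo completeness simulations.
rng = np.random.default_rng(0)
p_detect = rng.uniform(0.3, 0.9, size=70)  # 70 surveyed stars (assumed values)

def prob_zero_detections(f, p):
    """Probability of detecting no planets if a fraction f of stars host one."""
    return np.prod(1.0 - f * p)

# Scan occurrence rates; the 95% upper limit is the smallest f for which the
# chance of seeing zero detections drops below 5%.
f_grid = np.linspace(0.0, 1.0, 10001)
upper_limit = f_grid[np.argmax([prob_zero_detections(f, p_detect) < 0.05
                                for f in f_grid])]
print(f"95% upper limit on occurrence rate: {upper_limit:.3f}")
```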
Abstract:
This study presents the evaluation of seven pharmaceutical compounds belonging to different commonly used therapeutic classes in seawater samples from coastal areas of Gran Canaria Island. The target compounds include atenolol (antihypertensive), acetaminophen (analgesic), norfloxacin and ciprofloxacin (antibiotics), carbamazepine (antiepileptic), and ketoprofen and diclofenac (anti-inflammatories). Solid-phase extraction (SPE) was used for the extraction and preconcentration of the samples, and liquid chromatography tandem mass spectrometry (LC-MS/MS) was used for the determination of the compounds. Under optimal conditions, the recoveries obtained were in the range of 78.3% to 98.2%, and the relative standard deviations were less than 11.8%. The detection and quantification limits of the method were in the ranges of 0.1–2.8 and 0.3–9.3 ng·L⁻¹, respectively. The developed method was applied to evaluate the presence of these pharmaceutical compounds in seawater from four outfalls on Gran Canaria Island (Spain) over one year. Ciprofloxacin and norfloxacin were found in a large number of samples in a concentration range of 9.0–3551.7 ng·L⁻¹. Low levels of diclofenac, acetaminophen and ketoprofen were found sporadically.
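As a hedged illustration of how such detection and quantification limits are commonly derived (the abstract does not state the exact criterion used), a minimal sketch based on the widely used calibration-curve approach, with hypothetical data and the ICH-style factors 3.3 and 10:

```python
import numpy as np

# Hypothetical calibration data for one analyte (spiked seawater standards):
# concentrations in ng/L and instrument response (peak area, arbitrary units).
conc = np.array([0.5, 1.0, 2.5, 5.0, 10.0, 25.0])
area = np.array([52.0, 101.0, 249.0, 512.0, 1008.0, 2490.0])

# Least-squares calibration line: area = S * conc + b.
S, b = np.polyfit(conc, area, 1)

# Residual standard deviation of the regression, used as the noise estimate.
residuals = area - (S * conc + b)
sigma = np.sqrt(np.sum(residuals**2) / (len(conc) - 2))

# Common ICH-style criteria: LOD = 3.3*sigma/S, LOQ = 10*sigma/S.
lod = 3.3 * sigma / S
loq = 10.0 * sigma / S
print(f"LOD = {lod:.2f} ng/L, LOQ = {loq:.2f} ng/L")
```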
Abstract:
Thanks to the Chandra and XMM–Newton surveys, the hard X-ray sky is now probed down to a flux limit where the bulk of the X-ray background is almost completely resolved into discrete sources, at least in the 2–8 keV band. Extensive programs of multiwavelength follow-up observations showed that the large majority of hard X-ray selected sources are identified with Active Galactic Nuclei (AGN) spanning a broad range of redshifts, luminosities and optical properties. A sizable fraction of relatively luminous X-ray sources hosting an active, presumably obscured, nucleus would not have been easily recognized as such on the basis of optical observations, because they are characterized by “peculiar” optical properties. In my PhD thesis, I focus on the nature of two classes of hard X-ray selected “elusive” sources: those characterized by high X-ray-to-optical flux ratios and red optical-to-near-infrared colors, a fraction of which is associated with Type 2 quasars, and the X-ray bright optically normal galaxies, also known as XBONGs. In order to characterize the properties of these classes of elusive AGN, the datasets of several deep and large-area surveys have been fully exploited. The first class of “elusive” sources is characterized by X-ray-to-optical flux ratios (X/O) significantly higher than those generally observed in unobscured quasars and Seyfert galaxies. The properties of well-defined samples of high X/O sources detected at bright X-ray fluxes suggest that X/O selection is highly efficient in sampling high-redshift obscured quasars. At the limits of deep Chandra surveys (∼10⁻¹⁶ erg cm⁻² s⁻¹), high X/O sources are generally characterized by extremely faint optical magnitudes, hence their spectroscopic identification is hardly feasible even with the largest telescopes. In this framework, a detailed investigation of their X-ray properties may provide useful information on the nature of this important component of the X-ray source population. The X-ray data of the deepest X-ray observations ever performed, the Chandra deep fields, allow us to characterize the average X-ray properties of the high X/O population. The results of spectral analysis clearly indicate that the high X/O sources represent the most obscured component of the X-ray background: their spectra are harder (Γ ∼ 1) than those of any other class of sources in the deep fields, and also than the XRB spectrum itself (Γ ≈ 1.4). In order to better understand AGN physics and evolution, a much better knowledge of the redshift, luminosity and spectral energy distributions (SEDs) of elusive AGN is of paramount importance. The recent COSMOS survey provides the necessary multiwavelength database to characterize the SEDs of a statistically robust sample of obscured sources. The combination of high X/O and red colors offers a powerful tool to select obscured luminous objects at high redshift. A large sample of X-ray emitting extremely red objects (R − K > 5) has been collected and their optical-infrared properties have been studied. In particular, using an appropriate SED fitting procedure, the nuclear and host galaxy components have been deconvolved over a large range of wavelengths, and optical nuclear extinctions, black hole masses and Eddington ratios have been estimated. It is important to remark that the combination of hard X-ray selection and extremely red colors is highly efficient in picking up highly obscured, luminous sources at high redshift.
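For reference, a commonly adopted definition of the X-ray-to-optical flux ratio used in such selections (the additive constant is the conventional one for R-band magnitudes; it depends on the chosen bands and zero points, so take it as indicative rather than as the thesis' exact convention):
$$ \log\!\left(\frac{f_{\rm X}}{f_{\rm opt}}\right) = \log f_{\rm X} + \frac{m_R}{2.5} + 5.50 $$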
Although XBONGs do not represent a new source population, interest in the nature of these sources has gained renewed attention after the discovery of several examples in recent Chandra and XMM–Newton surveys. Even though several possibilities have been proposed in the recent literature to explain why a relatively luminous (LX = 10⁴²–10⁴³ erg s⁻¹) hard X-ray source does not leave any significant signature of its presence in terms of optical emission lines, the very nature of XBONGs is still a subject of debate. Good-quality photometric near-infrared data (ISAAC/VLT) of 4 low-redshift XBONGs from the HELLAS2XMM survey have been used to search for the presence of the putative nucleus, applying the surface-brightness decomposition technique. In two out of the four sources, the presence of a weak nuclear component hosted by a bright galaxy has been revealed. The results indicate that moderate amounts of gas and dust, covering a large solid angle (possibly 4π) at the nuclear source, may explain the lack of optical emission lines. A weak nucleus unable to produce sufficient UV photons may provide an alternative or additional explanation. On the basis of an admittedly small sample, we conclude that XBONGs constitute a mixed bag rather than a new source population. When the presence of a nucleus is revealed, it turns out to be mildly absorbed and hosted by a bright galaxy.
Abstract:
The main aim of this Ph.D. dissertation is the study of clustering dependent data by means of copula functions, with particular emphasis on microarray data. Copula functions are a popular multivariate modeling tool in every field where multivariate dependence is of great interest, yet their use in clustering has not yet been investigated. The first part of this work contains a review of the literature on clustering methods, copula functions and microarray experiments. The attention focuses on the K-means (Hartigan, 1975; Hartigan and Wong, 1979), hierarchical (Everitt, 1974) and model-based (Fraley and Raftery, 1998, 1999, 2000, 2007) clustering techniques, whose performance is compared. Then, the probabilistic interpretation of Sklar's theorem (Sklar, 1959), estimation methods for copulas such as the Inference for Margins (Joe and Xu, 1996), and the Archimedean and Elliptical copula families are presented. Finally, applications of clustering methods and copulas to genetic and microarray experiments are highlighted. The second part contains the original contribution. A simulation study is performed in order to evaluate the performance of the K-means and hierarchical bottom-up clustering methods in identifying clusters according to the dependence structure of the data generating process. Different simulations are performed by varying different conditions (e.g., the kind of margins (distinct, overlapping or nested) and the value of the dependence parameter), and the results are evaluated by means of different measures of performance. In light of the simulation results and of the limits of the two investigated clustering methods, a new clustering algorithm based on copula functions (‘CoClust’ in brief) is proposed. The basic idea, the iterative procedure of the CoClust and a description of the implemented R functions with their output are given. The CoClust algorithm is tested on simulated data (by varying the number of clusters, the copula models, the dependence parameter value and the degree of overlap of the margins) and compared with the performance of model-based clustering by using different measures of performance, such as the percentage of correctly identified numbers of clusters and the percentage of non-rejection of H₀ on the dependence parameter. It is shown that the CoClust algorithm overcomes all observed limits of the other investigated clustering techniques and is able to identify clusters according to the dependence structure of the data, independently of the degree of overlap of the margins and the strength of the dependence. The CoClust uses a criterion based on the maximized log-likelihood function of the copula and can virtually account for any possible dependence relationship between observations. Many peculiar characteristics of the CoClust are shown, e.g., its capability of identifying the true number of clusters and the fact that it does not require a starting classification. Finally, the CoClust algorithm is applied to the real microarray data of Hedenfalk et al. (2001), both to the gene expressions observed in three different cancer samples and to the columns (tumor samples) of the whole data matrix.
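As a hedged illustration (not the thesis' CoClust code, which is implemented in R), the building block such an algorithm maximizes is a copula log-likelihood evaluated on pseudo-observations; a minimal bivariate Gaussian-copula version:

```python
import numpy as np
from scipy import stats

# Minimal sketch: evaluate the Gaussian-copula log-likelihood of a candidate
# grouping of observations. Each row of `data` is an observation, each column
# a margin.

def gaussian_copula_loglik(data, rho):
    """Log-likelihood of a bivariate Gaussian copula with correlation rho."""
    # Transform each margin to uniforms via its empirical CDF (pseudo-observations).
    n = len(data)
    u = stats.rankdata(data, axis=0) / (n + 1)
    z = stats.norm.ppf(u)  # map to standard-normal scores
    cov = np.array([[1.0, rho], [rho, 1.0]])
    # Copula density = joint normal density / product of marginal normal densities.
    joint = stats.multivariate_normal(mean=[0, 0], cov=cov).logpdf(z)
    marg = stats.norm.logpdf(z).sum(axis=1)
    return np.sum(joint - marg)

rng = np.random.default_rng(1)
sample = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=200)
print(gaussian_copula_loglik(sample, 0.8))  # high for the true dependence
print(gaussian_copula_loglik(sample, 0.0))  # exactly zero under independence
```

In a CoClust-style iteration, candidate allocations of observations would be compared through this maximized copula log-likelihood.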
Abstract:
Monitoring foetal health is a very important task in clinical practice to appropriately plan pregnancy management and delivery. In the third trimester of pregnancy, ultrasound cardiotocography is the most widely employed diagnostic technique: foetal heart rate and uterine contraction signals are simultaneously recorded and analysed in order to ascertain foetal health. Because ultrasound cardiotocography interpretation still lacks complete reliability, new parameters and methods of interpretation, or alternative methodologies, are necessary to further support physicians' decisions. To this aim, foetal phonocardiography and electrocardiography are considered in this thesis as alternative techniques. Furthermore, the variability of the foetal heart rate is thoroughly studied. Frequency components and their modifications can be analysed by applying a time-frequency approach, for a distinct understanding of the spectral components and their change over time related to foetal reactions to internal and external stimuli (such as uterine contractions). Such modifications of the power spectrum can be a sign of autonomic nervous system reactions and therefore represent additional, objective information about foetal reactivity and health. However, some limits of ultrasonic cardiotocography remain, for example in long-term foetal surveillance, which is often recommended mainly in risky pregnancies. In these cases, fully non-invasive acoustic recording through the maternal abdomen, i.e. foetal phonocardiography, represents a valuable alternative to ultrasonic cardiotocography. Unfortunately, the recorded foetal heart sound signal is heavily loaded with noise, so the determination of the foetal heart rate raises serious signal processing issues. A new algorithm for foetal heart rate estimation from foetal phonocardiographic recordings is presented in this thesis. Different filtering and enhancement techniques were applied to enhance the first foetal heart sounds; several signal processing strategies were implemented, evaluated and compared, identifying the strategy characterized on average by the best results. In particular, phonocardiographic signals were recorded simultaneously with ultrasonic cardiotocographic signals in order to compare the two foetal heart rate series (the one estimated by the developed algorithm and the one provided by the cardiotocographic device). The algorithm's performance was tested on phonocardiographic signals recorded from pregnant women, yielding reliable foetal heart rate signals very close to the ultrasound cardiotocographic recordings, which were considered as reference. The algorithm was also tested by using a foetal phonocardiographic recording simulator developed and presented in this research thesis. The aim was to provide software for simulating recordings corresponding to different foetal conditions and recording situations, and to use it as a test tool for comparing and assessing different foetal heart rate extraction algorithms. Since there are few studies about the time characteristics and frequency content of foetal heart sounds, and the available literature in this area is poor and not rigorous, a data collection pilot study was also conducted with the purpose of specifically characterising both foetal and maternal heart sounds. Finally, the use of foetal phonocardiographic and electrocardiographic methodologies, and their combination, is presented in order to detect foetal heart rate and other functioning anomalies.
The developed methodologies, suitable for longer-term assessment, were able to detect heart beat events correctly, such as first and second heart sounds and QRS waves. The detection of such events provides reliable measures of the foetal heart rate and, potentially, information about the systolic time intervals and the foetal circulatory impedance.
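As a minimal sketch of one common foetal-heart-rate estimation approach (not the thesis' algorithm; the band edges and rate range below are assumptions): band-pass filter the phonocardiogram around the first heart sound, take an envelope, and locate the beat period via autocorrelation.

```python
import numpy as np
from scipy import signal

def estimate_fhr(pcg, fs):
    """Estimate foetal heart rate in bpm from a phonocardiogram segment."""
    # S1 energy is assumed here to lie roughly in the 20-60 Hz band.
    b, a = signal.butter(4, [20 / (fs / 2), 60 / (fs / 2)], btype="band")
    filtered = signal.filtfilt(b, a, pcg)
    envelope = np.abs(signal.hilbert(filtered))
    env = envelope - envelope.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    # Search lags corresponding to plausible foetal rates (110-160 bpm).
    lo, hi = int(fs * 60 / 160), int(fs * 60 / 110)
    lag = lo + np.argmax(ac[lo:hi])
    return 60.0 * fs / lag

# Synthetic test: a 140 bpm train of short 40 Hz tone bursts in noise.
fs = 1000
t = np.arange(0, 10, 1 / fs)
bursts = signal.square(2 * np.pi * (140 / 60) * t, duty=0.05) > 0
pcg = np.sin(2 * np.pi * 40 * t) * bursts
pcg = pcg + 0.5 * np.random.default_rng(2).standard_normal(len(t))
print(f"estimated FHR: {estimate_fhr(pcg, fs):.1f} bpm")
```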
Abstract:
For the safety assessment of nuclear waste repositories, the possible migration of the radiotoxic waste into the environment must be considered. Since plutonium is the major contributor to the radiotoxicity of spent nuclear fuel, it requires special care with respect to its mobilization into the groundwater. Plutonium has one of the most complicated chemistries of all elements: it can coexist in four oxidation states in parallel in one solution. In this work it is shown that in the presence of humic substances it is reduced to Pu(III) and Pu(IV). This work focuses on the interaction of Pu(III) with naturally occurring compounds (humic substances and the clay mineral kaolinite), while Pu(IV) was studied in a parallel doctoral work by Banik (in preparation). As plutonium is expected at extremely low concentrations in the environment, very sensitive methods are needed to monitor its presence and speciation. Resonance ionization mass spectrometry (RIMS) was used for determining the concentration of Pu in environmental samples, with a detection limit of 10⁶–10⁷ atoms. For the speciation of plutonium, CE-ICP-MS was routinely used to monitor the behaviour of Pu in the presence of humic substances. In order to lower the detection limits of the speciation methods, the coupling of CE to RIMS was proposed; the first steps have shown that this can be a powerful tool for studies of Pu under environmental conditions. Furthermore, the first steps in the coupling of two parallel working detectors (DAD and ICP-MS) to CE were performed, to enable a precise study of the complexation constants of plutonium with humic substances. The redox stabilization of Pu(III) was studied, and it was determined that NH₂OH·HCl can maintain Pu(III) in the reduced form up to pH 5.5–6. The complexation constants of Pu(III) with Aldrich humic acid (AHA) were determined at pH 3 and 4; the log β = 6.2–6.8 found in these experiments is comparable with the literature. The sorption of Pu(III) onto kaolinite was studied in batch experiments, and it was determined that the pH edge was at pH ~ 5.5. The speciation of plutonium on the surface of kaolinite was studied by EXAFS/XANES, and it was determined that the sorbed species was Pu(IV). The influence of AHA on the sorption of Pu(III) onto kaolinite was also investigated. It was determined that at pH < 5 the adsorption is enhanced by the presence of AHA (25 mg/L), while at pH > 6 the adsorption is strongly impaired (depending also on the addition sequence of the components), leading to a mobilization of plutonium in solution.
Abstract:
The objectives of this thesis are to develop new methodologies for the study of the space and time variability of the Italian upper-ocean ecosystem, through the combined use of multi-sensor satellite data and in situ observations, and to identify the capabilities and limits of remote sensing observations for monitoring the marine state at short and long time scales. Three oceanographic basins were selected and subjected to different types of analyses. The first region is the Tyrrhenian Sea, where a comparative analysis of altimetry and Lagrangian measurements was carried out to study the surface circulation. The results allowed us to deepen the knowledge of the Tyrrhenian Sea surface dynamics and its variability, and to define the limitations of satellite altimetry measurements in detecting small-scale marine circulation features. The Channel of Sicily study aimed to identify the spatio-temporal variability of phytoplankton biomass and to understand the impact of the upper-ocean circulation on the marine ecosystem. A combined analysis of long-term satellite time series of chlorophyll, sea surface temperature and sea level data was applied. The results allowed us to identify the key role of the Atlantic water inflow in modulating the seasonal variability of phytoplankton biomass in the region. Finally, the Italian coastal marine system was studied with the objective of exploring the potential of Ocean Color data for detecting chlorophyll trends in coastal areas. The most appropriate methodology to detect long-term environmental changes was defined through an intercomparison of chlorophyll trends detected from in situ and satellite data. Then, Italian coastal areas subject to eutrophication problems were identified. This work has demonstrated that satellite data constitute a unique opportunity to define the features and forcings influencing upper-ocean ecosystem dynamics, and can also be used to monitor environmental variables capable of influencing phytoplankton productivity.
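As a hedged sketch of one elementary trend-detection step (the thesis intercompares more elaborate methods; the series below is synthetic), fit a linear trend to deseasonalized monthly chlorophyll anomalies and test its significance:

```python
import numpy as np
from scipy import stats

# Synthetic 10-year monthly chlorophyll series: mean + trend + seasonal cycle + noise.
rng = np.random.default_rng(3)
months = np.arange(120)
seasonal = 0.3 * np.sin(2 * np.pi * months / 12)
chl = 0.8 + 0.002 * months + seasonal + 0.1 * rng.standard_normal(120)

# Remove the mean seasonal cycle first, so it does not bias the trend test.
anomalies = chl - np.array([chl[months % 12 == m].mean() for m in months % 12])

res = stats.linregress(months, anomalies)
print(f"trend: {res.slope * 12:.4f} mg m^-3 per year, p-value: {res.pvalue:.3g}")
```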
Abstract:
Perfluoroalkylated substances are a group of chemicals that have been widely employed during the last 60 years in several applications, spreading and accumulating in the environment due to their extreme resistance to degradation. As a consequence, they have also been found in various types of food as well as in drinking water, proving that they can easily reach humans through the diet. The available information concerning their adverse effects on health has recently increased the interest in these contaminants and highlighted the importance of investigating all potential sources of human exposure, among which diet was proved to be the most relevant. This need has been underlined by the European Union through Recommendation 2010/161/EU: in this document, Member States were called on to monitor the presence of these substances in food, producing accurate estimations of human exposure. The purpose of the research presented in this thesis, which is the result of a partnership between an Italian and a French laboratory, was to develop reliable tools for the analysis of these pollutants in food, to be used for generating data on potentially contaminated matrices. An efficient method based on liquid chromatography-mass spectrometry for the detection of 16 different perfluorinated compounds in milk has been validated in accordance with current European regulation guidelines (2002/657/EC) and is currently under evaluation for ISO 17025 accreditation. The proposed technique was applied to cow, powder and human breast milk samples from Italy and France to produce a preliminary survey of the presence of these contaminants. In accordance with the above-mentioned European Recommendation, this project also led to the development of a promising technique for the quantification of some precursors of these substances in fish. This method showed highly satisfactory performance in terms of linearity and limits of detection, and will be useful for future surveys.
Abstract:
In the last 20-30 years, the transfer of new technologies from research centres to food industry processes has been very fast. Infrared thermography is a tool used in many fields, including agriculture and food science and technology, because of its important qualities: it is a non-destructive method, and it is fast, accurate, repeatable and economical. Almost all industrial food processors have to use thermal processes to obtain an optimal product respecting quality and safety standards. The control of the temperature of food products during production, transportation, storage and sale is an essential process in the food industry network. This tool can minimize human error during the control of heat operations and reduce personnel costs. In this thesis, the application of infrared thermography (IRT) was studied for different products that require a thermal process during food processing. The background of thermography is presented, together with some of its applications in the food industry and the benefits and limits of its applicability. The temperature of the egg shell measured during heat treatment in natural convection and with hot air was compared with the temperatures calculated by a simplified finite element model developed previously. The comparison showed good agreement between calculated and observed temperatures, so this technique can be useful to control heat treatments for the decontamination of eggs using infrared thermography. Another important application of IRT was to determine the evolution of the emissivity of raw potato during the freezing process and to provide non-destructive control of this process. We can conclude that IRT can represent a real option for the control of thermal processes in the food industry, but more research on various products is necessary.
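As a hedged sketch of the kind of simplified heat-transfer estimate that such measurements can be compared against (not the thesis' finite element model; all parameter values below are assumptions), a lumped-capacitance approximation of shell temperature during hot-air treatment:

```python
import numpy as np

# Lumped-capacitance heating: T(t) = T_air + (T0 - T_air) * exp(-t / tau),
# with time constant tau = m*c / (h*A). All values are illustrative only.
h = 25.0       # convective heat-transfer coefficient, W m^-2 K^-1 (assumed)
A = 7.0e-3     # egg surface area, m^2 (assumed)
m = 60e-3      # egg mass, kg (assumed)
c = 3200.0     # effective specific heat, J kg^-1 K^-1 (assumed)
T_air, T0 = 60.0, 20.0   # hot-air and initial temperatures, deg C

tau = m * c / (h * A)
for t in np.linspace(0, 1800, 7):   # 30 minutes of treatment
    T = T_air + (T0 - T_air) * np.exp(-t / tau)
    print(f"t = {t:6.0f} s  T_shell ~ {T:5.1f} C")
```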
Abstract:
The purpose of this research is to provide empirical evidence on the determinants of the economic use of patented inventions, in order to contribute to the literature on technology and innovation management. The current work consists of three main parts, each of which constitutes a self-contained research paper. The first paper uses a meta-analytic approach to review and synthesize the existing body of empirical research on the determinants of technology licensing. The second paper investigates the factors affecting the choice between the following alternative economic uses of patented inventions: pure internal use, pure licensing, and mixed use. Finally, the third paper explores the least studied option for the economic use of patented inventions, namely the sale of patent rights. The data used to empirically test the hypotheses come from a large-scale survey of European Patent inventors resident in 21 European countries, Japan, and the US. The findings provided in this dissertation contribute to a better understanding of the economic use of patented inventions by expanding the limits of previous research in several different dimensions.
Abstract:
In this study, the Aerodyne Aerosol Mass Spectrometer (AMS) was used during three laboratory measurement campaigns: FROST1, FROST2 and ACI-03. The FROST campaigns took place at the Leipzig Aerosol Cloud Interaction Simulator (LACIS) at the IfT in Leipzig, and the ACI-03 campaign was conducted at the AIDA facility at the Karlsruhe Institute of Technology (KIT). In all three campaigns, the effect of coatings on mineral dust ice nuclei (IN) was investigated. During the FROST campaigns, Arizona Test Dust (ATD) particles of 200, 300 and 400 nm diameter were coated with thin layers (< 7 nm) of sulphuric acid. At such thin coatings, the AMS was operated close to its detection limits. Until now it has not been possible to accurately determine AMS detection limits during regular measurements. Therefore, the mathematical tools for analysing the detection limits of the AMS have been improved in this work. It is now possible to calculate detection limits of the AMS under operating conditions, without losing precious time by sampling through a particle filter. The instrument was characterised in more detail to enable correct quantification of the sulphate loadings on the ATD particle surfaces. Correction factors for the instrument inlet transmission, the collection efficiency, and the relative ionisation efficiency have been determined. With these corrections it was possible to quantify the sulphate mass per particle on the ATD after the condensation of sulphuric acid on its surface. The AMS results have been combined with the ice nucleus counter results. This revealed that the IN efficiency of ATD is reduced when it is coated with sulphuric acid. The reason for this reduction is a chemical reaction of sulphuric acid with the particle's surface. These reactions increasingly take place when the aerosol is humidified or heated after the coating with sulphuric acid. A detailed analysis of the solubility and the evaporation temperature of the surface reaction products revealed that most likely aluminium sulphate is produced in these reactions.
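For orientation, a hedged sketch of the conventional 3-sigma detection-limit definition that such analyses refine (the thesis derives limits during operation, without a dedicated filter period; here a simple blank-based estimate is used for illustration, with simulated data):

```python
import numpy as np

# Simulated filter-blank mass concentrations (ug/m^3): zero-mean noise plus
# a small offset, as a stand-in for real particle-free sampling periods.
rng = np.random.default_rng(4)
blank = rng.normal(0.002, 0.010, size=300)

# Conventional definition: detection limit = 3 x standard deviation of blanks.
detection_limit = 3.0 * np.std(blank, ddof=1)
print(f"3-sigma detection limit ~ {detection_limit:.3f} ug/m^3")
```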
Abstract:
The steadily increasing diversity of colloidal systems demands new theoretical approaches and a cautious experimental characterization. Here we present a combined rheological and microscopical study of colloids in their arrested state; we did not aim for a generalized treatise but rather focused on a few model colloids: liquid crystal based colloidal suspensions and sedimented colloidal films. We laid special emphasis on understanding the mutual influence of the dominant interaction mechanisms, the structural characteristics and the particle properties on the mechanical behavior of the colloid. The application of novel combinations of experimental techniques played an important role in these studies. Besides piezo-rheometry, we employed nanoindentation experiments and associated standardized analysis procedures. These rheometric methods were complemented by real-space images using confocal microscopy. The flexibility of the home-made setup allowed for a combination of both techniques, and thereby for a simultaneous rheological and three-dimensional structural analysis at the single-particle level. The limits of confocal microscopy, though, have not yet been reached. We show how hollow and optically anisotropic particles can be utilized to quantify contact forces and rotational motions for individual particles. In the future, such data can contribute to a better understanding of particle reorganization processes, such as the liquidation of colloidal gels and glasses under shear.
Abstract:
A substantial fraction of the organic carbon present in the atmosphere is found in the form of volatile organic compounds, which are predominantly released by the biosphere. Such biogenic emissions have a large influence on the chemical and physical properties of the atmosphere, as they contribute to the formation of ground-level ozone and secondary organic aerosols. To better understand the formation of ground-level ozone and secondary organic aerosols, the technical capability to accurately measure the sum of these volatile organic substances is necessary. Commonly used methods focus only on the detection of specific non-methane hydrocarbon compounds. The sum of these individual compounds may, however, represent only a lower limit on atmospheric organic carbon concentrations, since the available methods are not capable of analysing all organic compounds in the atmosphere. Some studies are known that have dealt with the determination of the total carbon content of non-methane hydrocarbons in air, but measurements of the total exchange of non-methane organic compounds between vegetation and the atmosphere are lacking. We therefore investigated the determination of the total carbon of non-methane organic compounds from biogenic sources. The determination of the total organic carbon was realised by collecting and enriching these compounds on a solid adsorption material. This first step was necessary to separate the stable gases CO, CO2 and CH4 from the organic carbon fraction. The organic compounds were thermally desorbed and oxidised to CO2. The CO2 produced by this oxidation was collected on a further enrichment unit and analysed by thermal desorption and subsequent detection with an infrared gas analyser. We identified as major difficulties (i) the separation of ambient CO2 from the organic carbon fraction during enrichment, (ii) the recovery rates of the various non-methane hydrocarbon compounds from the adsorption material, (iii) the choice of the catalyst, and (iv) interferences occurring at the detector of the total carbon analyser. The choice of a Pt-Rh wire as catalyst led to significant progress with respect to the correct determination of the CO2 background signal. This was necessary because small amounts of CO2 were also collected on the adsorption unit during the enrichment of the volatile organic substances. Catalytic materials with large surface areas turned out to be unusable for this application because, despite high temperatures, CO2 uptake and later release by the catalyst material was observed. The method was tested with various individual volatile organic substances as well as in two plant chamber experiments with a selection of VOC species emitted by different plants. The plant chamber measurements were accompanied by GC-MS and PTR-MS measurements. In addition, calibration tests with various individual substances from permeation/diffusion sources were carried out. The total carbon analyser was able to confirm the diurnal course of the plant emissions.
However, deviations of up to 50% in the mixing ratios of total organic carbon were observed in comparison with the accompanying standard methods.
Abstract:
Lattice Quantum Chromodynamics (LQCD) is the preferred tool for obtaining non-perturbative results from QCD in the low-energy regime. It has by now entered the era in which high-precision calculations for a number of phenomenologically relevant observables at the physical point, with dynamical quark degrees of freedom and controlled systematics, become feasible. Despite these successes, there are still quantities where control of systematic effects is insufficient. The subject of this thesis is the exploration of the potential of today's state-of-the-art simulation algorithms for non-perturbatively $\mathcal{O}(a)$-improved Wilson fermions to produce reliable results in the chiral regime and at the physical point, both for zero and non-zero temperature. Important in this context is control over the chiral extrapolation. This thesis is concerned with two particular topics, namely the computation of hadronic form factors at zero temperature, and the properties of the phase transition in the chiral limit of two-flavour QCD.

The electromagnetic iso-vector form factor of the pion provides a platform to study systematic effects and the chiral extrapolation for observables connected to the structure of mesons (and baryons). Mesonic form factors are computationally simpler than their baryonic counterparts but share most of the systematic effects. This thesis contains a comprehensive study of the form factor in the regime of low momentum transfer $q^2$, where the form factor is connected to the charge radius of the pion. A particular emphasis is on the region very close to $q^2=0$, which has not been explored so far, neither in experiment nor in LQCD. The results for the form factor close the gap between the smallest spacelike $q^2$-value available so far and $q^2=0$, and reach an unprecedented accuracy at full control over the main systematic effects. This enables the model-independent extraction of the pion charge radius. The results for the form factor and the charge radius are used to test chiral perturbation theory ($\chi$PT) and are thereby extrapolated to the physical point and the continuum. The final result in units of the hadronic radius $r_0$ is
$$ \left\langle r_\pi^2 \right\rangle^{\rm phys}/r_0^2 = 1.87 \: \left(^{+12}_{-10}\right)\left(^{+\:4}_{-15}\right) \quad \textnormal{or} \quad \left\langle r_\pi^2 \right\rangle^{\rm phys} = 0.473 \: \left(^{+30}_{-26}\right)\left(^{+10}_{-38}\right)(10) \: \textnormal{fm}^2 \;, $$
which agrees well with the results from other measurements in LQCD and experiment. Note that this is the first continuum-extrapolated result for the charge radius from LQCD extracted from measurements of the form factor in the region of small $q^2$.

The order of the phase transition in the chiral limit of two-flavour QCD and the associated transition temperature are the last unknown features of the phase diagram at zero chemical potential. The two possible scenarios are a second-order transition in the $O(4)$ universality class or a first-order transition. Since direct simulations in the chiral limit are not possible, the transition can only be investigated by simulating at non-zero quark mass with a subsequent chiral extrapolation, guided by the universal scaling in the vicinity of the critical point. The thesis presents the setup and first results from a study on this topic. The study provides the ideal platform to test the potential and limits of today's simulation algorithms at finite temperature.
The results from a first scan at a constant zero-temperature pion mass of about 290 MeV are promising, and it appears that simulations down to physical quark masses are feasible. Of particular relevance for the order of the chiral transition is the strength of the anomalous breaking of the $U_A(1)$ symmetry at the transition point. It can be studied by looking at the degeneracies of the correlation functions in the scalar and pseudoscalar channels. For the temperature scan reported in this thesis, the breaking is still pronounced in the transition region, and the symmetry becomes effectively restored only above $1.16\:T_C$. The thesis also provides an extensive outline of research perspectives and includes a generalisation of the standard multi-histogram method to explicitly $\beta$-dependent fermion actions.
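For reference, the standard low-$q^2$ expansion that links the form factor to the charge radius quoted above (a textbook relation, not specific to this thesis):
$$ F_\pi(q^2) = 1 + \frac{1}{6}\left\langle r_\pi^2\right\rangle q^2 + \mathcal{O}(q^4) \qquad\Longrightarrow\qquad \left\langle r_\pi^2\right\rangle = 6\,\frac{\mathrm{d}F_\pi}{\mathrm{d}q^2}\bigg|_{q^2=0} \;. $$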
Abstract:
21 cm cosmology opens an observational window onto previously unexplored cosmological epochs, such as the Epoch of Reionization (EoR), the Cosmic Dawn and the Dark Ages, using powerful radio interferometers such as the planned Square Kilometre Array (SKA). Among the many applications that can potentially improve the understanding of standard cosmology, we study the promising opportunity offered by measuring the weak gravitational lensing sourced by 21 cm radiation. We performed this study for two different cosmological epochs, first at a typical EoR redshift and subsequently at a post-EoR redshift. We show how the lensing signal can be reconstructed using a three-dimensional optimal quadratic lensing estimator in Fourier space, using a single frequency band or combining multiple frequency band measurements. To this purpose, we implemented a simulation pipeline capable of dealing with issues that cannot be treated analytically. Considering the current SKA plans, we studied the performance of the quadratic estimator at typical EoR redshifts, for different survey strategies, comparing two thermal noise models for the SKA-Low array. The simulations we performed take into account the beam of the telescope and the discreteness of visibility measurements. We found that an SKA-Low interferometer should obtain high-fidelity images of the underlying mass distribution in its phase 1 only if several bands are stacked together, covering a redshift range from z = 7 to z = 11.5. The SKA-Low phase 2, modeled so as to improve the sensitivity of the instrument by almost an order of magnitude, should be capable of providing images of good quality even when the signal is detected within a single frequency band. Considering also the serious effect that foregrounds could have on these detections, we discuss the limits of these results as well as the possibility these models provide of measuring an accurate lensing power spectrum.
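Schematically, and in its simplest two-dimensional, single-band form (the thesis' estimator is the three-dimensional generalization), a quadratic lensing estimator reconstructs the lensing potential $\phi$ from pairs of temperature modes:
$$ \hat{\phi}(\mathbf{L}) = N(\mathbf{L}) \int \frac{\mathrm{d}^2\boldsymbol{\ell}}{(2\pi)^2}\; T(\boldsymbol{\ell})\, T(\mathbf{L}-\boldsymbol{\ell})\; g(\boldsymbol{\ell},\mathbf{L}) \;, $$
where the weights $g$ are chosen to minimize the estimator variance and the normalization $N(\mathbf{L})$ ensures the estimator is unbiased.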