939 results for measurement error models
Abstract:
The phenomenon of diffuse scattering has been the subject of numerous studies in recent years, owing to its relevance to electromagnetic propagation as well as to many other fields of application (remote sensing, optics, physics, etc.), yet a complete understanding of this effect is far from being reached. The complexity of studying and characterizing diffuse scattering stems from the myriad of cases and effects that can be encountered in a real propagation environment, which suggests the need to treat its contribution probabilistically. Hence the need for applications that are efficient from an engineering standpoint and that combine a rigorous definition of the phenomenon with the simplifications required for practical purposes. In this view, diffuse scattering can be described as the superposition of all those effects that deviate from the classical laws of geometrical optics (reflection, refraction and diffraction) and that generate field contributions even at points in space and in directions where, in theory, for smooth and homogeneous objects, there should be none. In a real propagation environment the main effect is therefore a spatial distribution of the field different from the theoretical case of a smooth, homogeneous surface, together with depolarization effects and a redistribution of energy in the power balance. The complexity of the phenomenon is thus evident, and the objective of this work is to propose new results that allow diffuse scattering to be described better, and likewise to identify the topics on which to focus attention in future work. First, a literature study was carried out to identify the existing models and theories and to single out the points deserving further reflection; at the same time, methodologies for characterizing the complex electric permittivity of materials were analyzed, in order to evaluate the possibility of deriving the parameters to be used in the simulations from the same measurement setup devised for the study of diffuse scattering. Subsequently, a simulation setup was built with an electromagnetic solver (based on the finite-difference time-domain method), with which it was possible to analyze the three-dimensional scattering caused by material irregularities. Finally, a measurement campaign was carried out in an anechoic chamber with a purpose-built experimental bench to characterize the scattering phenomenon over a wide band.
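A minimal illustrative sketch of when the smooth-surface assumption breaks down, using the textbook Rayleigh roughness criterion; the criterion and its λ/(8 cos θ) threshold are standard results, not taken from the thesis:

```python
import numpy as np

def is_rough_rayleigh(sigma_h, wavelength, theta_i_deg):
    """Rayleigh criterion: a surface scatters diffusely (rather than
    specularly) when the height standard deviation sigma_h exceeds
    lambda / (8 cos(theta_i)). Lengths in the same units; theta_i is
    the incidence angle from the surface normal, in degrees."""
    theta = np.radians(theta_i_deg)
    threshold = wavelength / (8.0 * np.cos(theta))
    return sigma_h > threshold, threshold

# Example: 1 mm surface roughness at 60 GHz (wavelength 5 mm), 45 deg incidence
rough, thr = is_rough_rayleigh(1.0e-3, 5.0e-3, 45.0)
print(f"threshold = {thr * 1e3:.2f} mm, rough = {rough}")
```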
Abstract:
The goal of this thesis was an experimental test of an effective theory of strong interactions at low energy, called Chiral Perturbation Theory (ChPT). Weak decays of kaons provide such a test. In particular, K± → π±γγ decays are interesting because there is no tree-level O(p2) contribution in ChPT, and the leading contributions start at O(p4). At this order, these decays include one undetermined coupling constant, ĉ. Both the branching ratio and the spectrum shape of K± → π±γγ decays are sensitive to this parameter. The O(p6) ChPT contributions to K± → π±γγ are predicted to increase the branching ratio by 30-40%. From the measurement of the branching ratio and spectrum shape of K± → π±γγ decays, it is possible to determine a model-dependent value of ĉ and also to examine whether the O(p6) corrections are necessary and sufficient to explain the rate.

About 40% of the data collected in the year 2003 by the NA48/2 experiment have been analyzed, and 908 K± → π±γγ candidates with about 8% background contamination have been selected in the region z = m_γγ^2/m_K^2 ≥ 0.2. Using 5,750,121 selected K± → π±π0 decays as the normalization channel, a model-independent differential branching ratio of K± → π±γγ has been measured to be:

BR(K± → π±γγ, z ≥ 0.2) = (1.018 ± 0.038(stat.) ± 0.039(syst.) ± 0.004(ext.)) × 10^-6.

From the fit of the O(p6) ChPT prediction to the measured branching ratio and the shape of the z-spectrum, a value of ĉ = 1.54 ± 0.15(stat.) ± 0.18(syst.) has been extracted. Using the measured ĉ value and the O(p6) ChPT prediction, the branching ratio for z = m_γγ^2/m_K^2 < 0.2 was computed and added to the measured result. The value obtained for the total branching ratio is:

BR(K± → π±γγ) = (1.055 ± 0.038(stat.) ± 0.039(syst.) ± 0.004(ext.) +0.003/-0.002(ĉ)) × 10^-6,

where the last error reflects the uncertainty on ĉ.

The branching ratio presented here agrees with previous experimental results and improves the precision of the measurement by at least a factor of five. The precision on the ĉ measurement has been improved by approximately a factor of three. A slight disagreement with the O(p6) ChPT prediction of the branching ratio as a function of ĉ has been observed; this might be due to the possible existence of non-negligible terms not yet included in the theory. Within the scope of this thesis, η-η′ mixing effects in O(p4) ChPT have also been measured.
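As a hedged illustration of the arithmetic behind the quoted totals (a sketch, not the NA48/2 analysis), the snippet below adds to the measured z ≥ 0.2 branching ratio a low-z contribution inferred from the quoted numbers and combines the uncorrelated uncertainties in quadrature:

```python
import math

# Quoted measurement for z >= 0.2, in units of 1e-6
br_meas = 1.018
stat, syst, ext = 0.038, 0.039, 0.004

# O(p^6) ChPT extrapolation for z < 0.2, inferred here from the quoted
# total (1.055) minus the measured part; an illustration, not a fit.
br_low_z = 0.037

br_total = br_meas + br_low_z
err = math.sqrt(stat**2 + syst**2 + ext**2)  # uncorrelated combination
print(f"BR(K+- -> pi+- gamma gamma) ~ ({br_total:.3f} +- {err:.3f}) x 1e-6")
```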
Abstract:
Atmospheric aerosol particles serving as cloud condensation nuclei (CCN) are key elements of the hydrological cycle and climate. Knowledge of the spatial and temporal distribution of CCN in the atmosphere is essential to understand and describe the effects of aerosols in meteorological models. In this study, CCN properties were measured in polluted and pristine air of different continental regions, and the results were parameterized for efficient prediction of CCN concentrations.

The continuous-flow CCN counter used for size-resolved measurements of CCN efficiency spectra (activation curves) was calibrated with ammonium sulfate and sodium chloride aerosols for a wide range of water vapor supersaturations (S = 0.068% to 1.27%). A comprehensive uncertainty analysis showed that the instrument calibration depends strongly on the applied particle generation techniques, Köhler model calculations, and water activity parameterizations (relative deviations in S up to 25%). Laboratory experiments and a comparison with other CCN instruments confirmed the high accuracy and precision of the calibration and measurement procedures developed and applied in this study.

The mean CCN number concentrations (N_CCN,S) observed in polluted mega-city air and biomass burning smoke (Beijing and Pearl River Delta, China) ranged from 1000 cm−3 at S = 0.068% to 16 000 cm−3 at S = 1.27%, which is about two orders of magnitude higher than in pristine air at remote continental sites (Swiss Alps, Amazonian rainforest). Effective average hygroscopicity parameters, κ, describing the influence of chemical composition on the CCN activity of aerosol particles were derived from the measurement data. They varied in the range of 0.3 ± 0.2, were size-dependent, and could be parameterized as a function of organic and inorganic aerosol mass fraction. At low S (≤0.27%), substantial portions of externally mixed CCN-inactive particles with much lower hygroscopicity were observed in polluted air (fresh soot particles with κ ≈ 0.01). Thus, the aerosol particle mixing state needs to be known for highly accurate predictions of N_CCN,S. Nevertheless, the observed CCN number concentrations could be efficiently approximated using measured aerosol particle number size distributions and a simple κ-Köhler model with a single proxy for the effective average particle hygroscopicity. The relative deviations between observations and model predictions were on average less than 20% when a constant average value of κ = 0.3 was used in conjunction with variable size distribution data. With a constant average size distribution, however, the deviations increased up to 100% and more. The measurement and model results demonstrate that aerosol particle number and size are the major predictors for the variability of the CCN concentration in continental boundary layer air, followed by particle composition and hygroscopicity as relatively minor modulators. Depending on the required and applicable level of detail, the measurement results and parameterizations presented in this study can be directly implemented in detailed process models as well as in large-scale atmospheric and climate models for an efficient description of the CCN activity of atmospheric aerosols.
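The κ-Köhler model mentioned above can be sketched in a few lines; this is a minimal illustration of the Petters & Kreidenweis (2007) formulation with bulk water properties, not the calibration code used in the study:

```python
import numpy as np

# Approximate water properties at T = 298.15 K
SIGMA_W = 0.072     # surface tension, J m^-2
M_W = 0.018015      # molar mass, kg mol^-1
RHO_W = 997.0       # density, kg m^-3
R_GAS = 8.314       # gas constant, J mol^-1 K^-1

def critical_supersaturation(d_dry, kappa, T=298.15):
    """Critical supersaturation (%) of a particle with dry diameter d_dry
    (m) and hygroscopicity kappa, from kappa-Koehler theory."""
    A = 4.0 * SIGMA_W * M_W / (R_GAS * T * RHO_W)   # Kelvin parameter, m
    s_c = np.exp(np.sqrt(4.0 * A**3 / (27.0 * kappa * d_dry**3))) - 1.0
    return 100.0 * s_c

# Example: continental average kappa = 0.3, 100 nm dry diameter -> ~0.2 %
print(f"S_c = {critical_supersaturation(100e-9, 0.3):.2f} %")
```

Particles whose critical supersaturation lies below the instrument supersaturation S activate as CCN, so integrating a measured number size distribution above the corresponding critical diameter yields the predicted N_CCN,S.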
Abstract:
The electromagnetic form factors of the proton are fundamental quantities sensitive to the distribution of charge and magnetization inside the proton. Precise knowledge of the form factors, in particular of the charge and magnetization radii, provides strong tests for theory in the non-perturbative regime of QCD. However, the existing data at Q^2 below 1 (GeV/c)^2 are not precise enough for a hard test of theoretical predictions.

For a more precise determination of the form factors, within this work more than 1400 cross sections of the reaction H(e,e′)p were measured at the Mainz Microtron MAMI using the 3-spectrometer facility of the A1 collaboration. The data were taken in three periods in the years 2006 and 2007 using beam energies of 180, 315, 450, 585, 720 and 855 MeV. They cover the Q^2 region from 0.004 to 1 (GeV/c)^2 with counting-rate uncertainties below 0.2% for most of the data points. The relative luminosity of the measurements was determined using one of the spectrometers as a luminosity monitor. The overlapping acceptances of the measurements maximize the internal redundancy of the data and allow, together with several additions to the standard experimental setup, for tight control of systematic uncertainties.

To account for the radiative processes, an event generator was developed and implemented in the simulation package of the analysis software; it works without peaking approximation by explicitly calculating the Bethe-Heitler and Born Feynman diagrams for each event.

To separate the form factors and to determine the radii, the data were analyzed by fitting a wide selection of form factor models directly to the measured cross sections. These fits also determined the absolute normalization of the different data subsets. The validity of this method was tested with extensive simulations. The results were compared to an extraction via the standard Rosenbluth technique.

The dip structure in G_E that was seen in the analysis of the previous world data shows up in a modified form. When compared to the standard-dipole form factor as a smooth curve, the extracted G_E exhibits a strong change of the slope around 0.1 (GeV/c)^2, and in the magnetic form factor a dip around 0.2 (GeV/c)^2 is found. This may be taken as an indication of a pion cloud. For higher Q^2, the fits yield larger values for G_M than previous measurements, in agreement with form factor ratios from recent precise polarized measurements in the Q^2 region up to 0.6 (GeV/c)^2.

The charge and magnetic rms radii are determined as
⟨r_e⟩ = 0.879 ± 0.005(stat.) ± 0.004(syst.) ± 0.002(model) ± 0.004(group) fm,
⟨r_m⟩ = 0.777 ± 0.013(stat.) ± 0.009(syst.) ± 0.005(model) ± 0.002(group) fm.
This charge radius is significantly larger than theoretical predictions and than the radius of the standard dipole. However, it is in agreement with earlier results measured at the Mainz linear accelerator and with determinations from hydrogen Lamb shift measurements. The extracted magnetic radius is smaller than previous determinations and than the standard-dipole value.
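For context, the radii quoted above follow from the slope of the normalized form factors at Q^2 = 0 via ⟨r^2⟩ = −6 dG/dQ^2|_0. A minimal sketch using the standard dipole as a stand-in model (not one of the thesis fit models):

```python
import numpy as np

HBARC2 = 0.19733 ** 2   # (GeV fm)^2, converts GeV^-2 to fm^2

def rms_radius(G, dq2=1e-6):
    """rms radius (fm) from the numerical slope of a normalized form
    factor at Q^2 = 0: <r^2> = -6 dG/dQ^2|_0, with Q^2 in (GeV/c)^2."""
    slope = (G(dq2) - G(0.0)) / dq2
    return np.sqrt(-6.0 * slope * HBARC2)

# Standard dipole, G(Q^2) = (1 + Q^2/0.71)^-2
dipole = lambda q2: (1.0 + q2 / 0.71) ** -2
print(f"standard-dipole radius = {rms_radius(dipole):.3f} fm")  # ~0.811 fm
```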
Abstract:
This thesis is a collection of works focused on the topic of Earthquake Early Warning, with special attention to large-magnitude events. The topic is addressed from different points of view, and the structure of the thesis reflects the variety of aspects that have been analyzed. The first part is dedicated to the giant 2011 Tohoku-Oki earthquake. The main features of the rupture process are first discussed. The earthquake is then used as a case study to test the feasibility of Early Warning methodologies for very large events. The limitations of the standard approaches for large events emerge in this chapter; the difficulties are related to the real-time magnitude estimate from the first few seconds of recorded signal. An evolutionary strategy for the real-time magnitude estimate is proposed and applied to the Tohoku-Oki earthquake. In the second part of the thesis, a larger number of earthquakes is analyzed, including small, moderate and large events. Starting from the measurement of two Early Warning parameters, the behavior of small and large earthquakes in the initial portion of the recorded signals is investigated. The aim is to understand whether small and large earthquakes can be distinguished from the initial stage of their rupture process. A physical model and a plausible interpretation justifying the observations are proposed. The third part of the thesis is focused on practical, real-time approaches for the rapid identification of the potentially damaged zone during a seismic event. Two different approaches for the rapid prediction of the damage area are proposed and tested: the first is a threshold-based method which uses traditional seismic data; the second is an innovative approach using continuous GPS data. Both strategies improve the prediction of the large-scale effects of strong earthquakes.
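One widely used real-time magnitude proxy of the kind discussed above is the initial peak displacement Pd, with the empirical scaling log10(Pd) = A + B·M + C·log10(R). The sketch below inverts it for M; the coefficients are of the size reported in the literature (e.g., Wu & Zhao, 2006) but serve here only as placeholders, since any real application requires regional calibration:

```python
import numpy as np

def magnitude_from_pd(pd_cm, dist_km, A=-3.463, B=0.729, C=-1.374):
    """Invert the empirical scaling log10(Pd) = A + B*M + C*log10(R)
    for magnitude M, given the peak displacement Pd (cm) in the first
    seconds of P-wave signal and the hypocentral distance R (km).
    The coefficients are illustrative placeholders."""
    return (np.log10(pd_cm) - A - C * np.log10(dist_km)) / B

# Example: Pd = 0.1 cm observed at 50 km
print(f"M ~ {magnitude_from_pd(0.1, 50.0):.1f}")
```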
Abstract:
We have used kinematic models in two Italian regions to reproduce surface interseismic velocities obtained from InSAR and GPS measurements. We have adopted a block modeling (BM) approach to evaluate which fault system is actively accommodating the ongoing deformation in the two study areas. For the Umbria-Marche Apennines, we find that the tectonic extension observed by GPS measurements is explained by the active contribution of at least two fault systems, one of which is the Alto Tiberina fault (ATF). We have also estimated the interseismic coupling distribution of the ATF using a 3D surface; the result shows an interesting correlation between the microseismicity and the uncoupled fault portions. The second area analyzed is the Gargano promontory, for which we have used the available InSAR and GPS velocities jointly. We first referred the two datasets to the same terrestrial reference frame and then, using a simple dislocation approach, estimated the fault parameters that best reproduce the available data, obtaining a solution corresponding to the Mattinata fault. Subsequently, we considered both the GPS and InSAR datasets within a BM analysis in order to evaluate whether the Mattinata fault may accommodate the deformation occurring in the central Adriatic due to the relative motion between the North-Adriatic and South-Adriatic plates. We find that the deformation occurring in that region should be accommodated by more than one fault system, which is, however, difficult to detect given the poor coverage of geodetic measurements offshore of the Gargano promontory. Finally, we have also estimated the interseismic coupling distribution of the Mattinata fault, obtaining a shallow coupling pattern. Both coupling distributions obtained with the BM approach have been tested by means of checkerboard resolution tests, which demonstrate that the coupling patterns depend on the positions of the geodetic data.
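A hedged, minimal analogue of the interseismic forward problem solved by block models is the 2D screw-dislocation profile of Savage & Burford (1973); it is not the 3D BM code used in this work, but it shows the basic ingredients (long-term slip rate and locking depth):

```python
import numpy as np

def interseismic_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
    """Fault-parallel surface velocity across a strike-slip fault locked
    down to depth D and slipping at rate s below it (Savage & Burford,
    1973): v(x) = (s / pi) * arctan(x / D)."""
    return (slip_rate_mm_yr / np.pi) * np.arctan(x_km / locking_depth_km)

# Example: profile across a fault slipping 5 mm/yr, locked to 10 km depth
x = np.linspace(-100.0, 100.0, 9)   # distance from the fault trace, km
print(np.round(interseismic_velocity(x, 5.0, 10.0), 2))  # mm/yr
```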
Abstract:
Precision measurements of observables in neutron beta decay address important open questions of particle physics and cosmology. In this thesis, a measurement of the proton recoil spectrum with the spectrometer aSPECT is described. From this spectrum, the antineutrino-electron angular correlation coefficient a can be derived. In our first beam time at the FRM II in Munich, background instabilities prevented us from presenting a new value for a. In the latest beam time at the ILL in Grenoble, the background was reduced sufficiently. In the course of the data analysis, we identified and fixed a problem in the detector electronics which caused a significant systematic error. The aim of the latest beam time was a new value for a with an error well below the 4% uncertainty of the present literature value. A statistical accuracy of about 1.4% was reached, but we could only set upper limits on the correction for the problem in the detector electronics, which are too high to allow a meaningful result. This thesis therefore focused on the investigation of different systematic effects. With the knowledge of the systematics gained in this thesis, we will be able to improve aSPECT to perform a 1% measurement of a in a further beam time.
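For illustration only: the coefficient a enters the decay rate through the factor 1 + aβ cos θ_eν. The toy Monte Carlo below samples this angular shape and recovers a from the mean opening angle; the fixed β and the direct angular sampling are deliberate simplifications, since aSPECT actually infers a from the shape of the proton recoil spectrum and β varies event by event:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_cos_theta(a, beta, n):
    """Accept-reject sampling of cos(theta_e_nu) for the angular
    distribution w(c) proportional to 1 + a*beta*c."""
    out = np.empty(0)
    while out.size < n:
        c = rng.uniform(-1.0, 1.0, n)
        u = rng.uniform(0.0, 1.0 + abs(a) * beta, n)
        out = np.concatenate([out, c[u < 1.0 + a * beta * c]])
    return out[:n]

a_true, beta = -0.103, 0.7   # a near the literature value; beta illustrative
cos_t = sample_cos_theta(a_true, beta, 2_000_000)
a_est = 3.0 * cos_t.mean() / beta   # <cos> = a*beta/3 for this shape
print(f"a_est = {a_est:.4f} (true {a_true})")
```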
Abstract:
The new stage of the Mainz Microtron, MAMI, at the Institute for Nuclear Physics of the Johannes Gutenberg University, operational since 2007, allows open-strangeness experiments to be performed. Addressing the lack of electroproduction data at very low Q^2, the p(e,K+)Lambda and p(e,K+)Sigma0 reactions have been studied at Q^2 = 0.036 (GeV/c)^2 and Q^2 = 0.050 (GeV/c)^2 in a large angular range. Cross sections at W = 1.75 GeV will be given in angular bins and compared with the predictions of the Saclay-Lyon and Kaon-Maid isobaric models. We conclude that the original Kaon-Maid model, which has large longitudinal couplings of the photon to nucleon resonances, is unphysical. Extensive studies of the suitability of silicon photomultipliers as readout devices for a scintillating-fiber tracking detector, with potential applications in both the positive and negative arms of the spectrometer, will be presented as well.
Abstract:
OBJECTIVE To determine the practicability and accuracy of central corneal thickness (CCT) measurements in living chicks using a noncontact, high-speed optical low-coherence reflectometer (OLCR) mounted on a slit lamp. ANIMALS STUDIED Twelve male chicks (Gallus gallus domesticus). PROCEDURES Measurements of CCT were obtained in triplicate in 24 eyes of twelve 1-day-old anaesthetized chicks using OLCR. Each single OLCR measurement was the average of 20 scans obtained within seconds. Additionally, corneal thickness was determined histologically after immersion fixation in Karnovsky's solution alone (20 eyes) or with a previous injection of the fixative into the anterior chamber before enucleation (4 eyes). RESULTS Measuring CCT with OLCR in 1-day-old living chicks proved to be a rapid and feasible examination technique. Mean CCT measured with OLCR (189.7 ± 3.34 μm) was significantly lower than the histological measurement (242.1 ± 47.27 μm) in eyes fixed in Karnovsky's solution alone (P = 0.0005). In eyes with an additional injection of Karnovsky's fixative into the anterior chamber, the mean histologically determined CCT was 195.2 ± 8.25 μm vs. 191.9 ± 8.90 μm with OLCR, with a trend toward lower variance compared with the eyes that had only been immersion-fixed. CONCLUSION Optical low-coherence reflectometry is an accurate technique for measuring CCT in vivo in the eye of newborn chicks. Knowledge of the thickness of the chick cornea and the ability to obtain noninvasive, noncontact measurements of CCT in the living animal may be of interest for research on the development of eye diseases in chick models.
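A minimal sketch of the kind of paired comparison reported above; the numbers are synthetic draws matched to the quoted means and SDs of the injected-fixative group, not the study's raw data:

```python
import numpy as np
from scipy import stats

# Synthetic per-eye values (micrometres) for the 4 injected-fixative eyes,
# generated to mimic 195.2 +- 8.25 (histology) vs. 191.9 +- 8.90 (OLCR)
rng = np.random.default_rng(0)
histo = rng.normal(195.2, 8.25, 4)
olcr = rng.normal(191.9, 8.90, 4)

# Paired t-test on per-eye differences
t, p = stats.ttest_rel(histo, olcr)
print(f"mean difference = {np.mean(histo - olcr):.1f} um, p = {p:.3f}")
```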
Abstract:
This paper summarises the discussions which took place at the Workshop on Methodology in Erosion Research in Zürich, 2010, and aims, where possible, to offer guidance for the development and application of both in vitro and in situ models for erosion research. The prospects for clinical trials are also discussed. All models in erosion research require a number of choices regarding experimental conditions, study design and measurement techniques, and these general aspects are discussed first. Among in vitro models, simple (single- or multiple-exposure) models can be used for screening products regarding their erosive potential, while more elaborate pH cycling models can be used to simulate erosion in vivo. However, in vitro models provide limited information on intra-oral erosion. In situ models allow the effect of an erosive challenge to be evaluated under intra-oral conditions and are currently the method of choice for short-term testing of low-erosive products or preventive therapeutic products. In the future, clinical trials will allow longer-term testing. Possible methodologies for such trials are discussed.
Abstract:
Complete basis set and Gaussian-n methods were combined with Barone and Cossi's implementation of the polarizable conductor model (CPCM) continuum solvation methods to calculate pKa values for six carboxylic acids. Four different thermodynamic cycles were considered in this work. An experimental value of −264.61 kcal/mol for the free energy of solvation of H+, ΔG_s(H+), was combined with a value for G_gas(H+) of −6.28 kcal/mol to calculate pKa values with cycle 1. The complete basis set gas-phase methods used to calculate gas-phase free energies are very accurate, with mean unsigned errors of 0.3 kcal/mol and standard deviations of 0.4 kcal/mol. The CPCM solvation calculations used to calculate condensed-phase free energies are slightly less accurate than the gas-phase models, and the best method has a mean unsigned error and standard deviation of 0.4 and 0.5 kcal/mol, respectively. Thermodynamic cycles that include an explicit water molecule in the cycle are not accurate when the free energy of solvation of a water molecule is used, but appear to become accurate when the experimental free energy of vaporization of water is used. This apparent improvement is an artifact of the standard state used in the calculation. Geometry relaxation in solution does not improve the results when using these latter cycles. The use of cycle 1 and the complete basis set models combined with the CPCM solvation methods yielded pKa values accurate to less than half a pKa unit.
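A minimal sketch of the cycle 1 arithmetic, using the proton values quoted above; the acid-specific free energies are illustrative (roughly acetic-acid-like), not values from the paper:

```python
import math

R_KCAL = 1.987e-3    # gas constant, kcal mol^-1 K^-1
T = 298.15           # K

# Proton values used in the paper (kcal/mol)
DG_SOLV_H = -264.61  # experimental free energy of solvation of H+
G_GAS_H = -6.28      # gas-phase free energy of H+

def pka_cycle1(dg_gas_deprot, dg_solv_anion, dg_solv_acid):
    """pKa from cycle 1: deprotonate in the gas phase, then solvate each
    species. dg_gas_deprot = G_gas(A-) - G_gas(HA), excluding the proton
    terms, which are added here. All inputs in kcal/mol."""
    dg_aq = (dg_gas_deprot + G_GAS_H
             + dg_solv_anion + DG_SOLV_H - dg_solv_acid)
    return dg_aq / (R_KCAL * T * math.log(10))

# Illustrative acetic-acid-like inputs -> pKa near 4.6
print(f"pKa ~ {pka_cycle1(348.1, -77.6, -6.7):.1f}")
```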
Abstract:
The subject of this study is to investigate the capability of spaceborne remote sensing data to predict ground concentrations of PM10 over the European Alpine region using satellite-derived Aerosol Optical Depth (AOD) from the geostationary Spinning Enhanced Visible and InfraRed Imager (SEVIRI) and the polar-orbiting MODerate resolution Imaging Spectroradiometer (MODIS). The spatial and temporal resolutions of these aerosol products (10 km and 2 measurements per day for MODIS, ~25 km and observation intervals of 15 min for SEVIRI) permit an evaluation of PM estimation from space at different spatial and temporal scales. Different empirical linear relationships between coincident AOD and PM10 observations are evaluated at 13 ground-based PM measurement sites, under the assumption that aerosols are vertically homogeneously distributed below the planetary Boundary Layer Height (BLH). The BLH and Relative Humidity (RH) variability are assessed, as well as their impact on the parameterization. The BLH has a strong influence on the correlation of daily and hourly time series, whilst RH effects are less clear and smaller in magnitude. Despite its lower spatial resolution and AOD accuracy, SEVIRI shows higher correlations than MODIS (r_SEV ≈ 0.7, r_MOD ≈ 0.6) with regard to daily averaged PM10. Advantages from MODIS arise only at hourly time scales in mountainous locations, but lower correlations were found for both sensors at this time scale (r ≈ 0.45). Moreover, the fraction of days in 2008 with at least one satellite observation was 27% for SEVIRI and 17% for MODIS. These results suggest that the frequency of observations plays an important role in PM monitoring, while higher spatial resolution does not generally improve the PM estimation. Ground-based Sun Photometer (SP) measurements are used to validate the satellite-based AOD in the study region and to discuss the impact of aerosols' micro-physical properties in the empirical models. A lower error limit of 30 to 60% in the PM10 assessment from space is estimated in the study area as a result of AOD uncertainties, variability of aerosol properties, and the heterogeneity of ground measurement sites. It is concluded that SEVIRI has a similar capacity to map PM as sensors on board polar-orbiting platforms, with the advantage of a higher number of observations. However, the accuracy represents a serious limitation to the applicability of satellites for ground PM mapping, especially in mountainous areas.
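A hedged sketch of the simplest member of the empirical model family evaluated above: under the vertical-homogeneity assumption, column AOD relates to surface PM10 through the boundary layer height. The collocated data below are synthetic, and the study's actual regressions also consider RH effects:

```python
import numpy as np

# Synthetic collocations (not the study's data):
aod = np.array([0.08, 0.15, 0.22, 0.30, 0.12, 0.25])      # unitless
blh = np.array([800., 1200., 1500., 900., 1100., 1400.])  # m
pm10 = np.array([14., 20., 24., 46., 17., 28.])           # ug/m^3

# AOD per km of mixed layer: the surface-concentration proxy implied by
# a vertically homogeneous aerosol layer below the BLH
x = aod / (blh / 1000.0)
b, a = np.polyfit(x, pm10, 1)    # fit PM10 = a + b * AOD/BLH
pred = a + b * x
r = np.corrcoef(pm10, pred)[0, 1]
print(f"PM10 ~ {a:.1f} + {b:.1f} * AOD/BLH[km], r = {r:.2f}")
```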
Abstract:
To enhance understanding of the metabolic indicators of type 2 diabetes mellitus (T2DM) disease pathogenesis and progression, the urinary metabolomes of well characterized rhesus macaques (normal or spontaneously and naturally diabetic) were examined. High-resolution ultra-performance liquid chromatography coupled with the accurate mass determination of time-of-flight mass spectrometry was used to analyze spot urine samples from normal (n = 10) and T2DM (n = 11) male monkeys. The machine-learning algorithm random forests was used to classify urine samples as coming from either normal or T2DM monkeys, and the metabolites important for developing the classifier were further examined for their biological significance. The random forest models had a misclassification error of less than 5%. Metabolites were identified based on accurate masses (<10 ppm) and confirmed by tandem mass spectrometry of authentic compounds. Urinary compounds significantly increased (p < 0.05) in the T2DM group compared with the normal group included glycine betaine (9-fold), citric acid (2.8-fold), kynurenic acid (1.8-fold), glucose (68-fold), and pipecolic acid (6.5-fold). The metabolites were also useful in defining the T2DM condition when compared with its conventional definition, and the urinary elevations in glycine betaine and pipecolic acid (as well as proline) indicated defective re-absorption in the kidney proximal tubules by SLC6A20, a Na(+)-dependent transporter. The mRNA levels of SLC6A20 were significantly reduced in the kidneys of monkeys with T2DM. These observations were validated in the db/db mouse model of T2DM. This study provides convincing evidence of the power of metabolomics for identifying functional changes at many levels in the omics pipeline.
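A minimal sketch of the random forests workflow described above, with synthetic stand-in features in place of the UPLC-TOF-MS data; the out-of-bag score plays the role of the reported misclassification error:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stand-in data: 21 urine samples (10 normal, 11 T2DM) x 50 metabolite
# intensities; lognormal draws mimic positive, skewed MS intensities.
rng = np.random.default_rng(42)
X = rng.lognormal(size=(21, 50))
y = np.array([0] * 10 + [1] * 11)   # 0 = normal, 1 = T2DM

# Out-of-bag error approximates the misclassification rate without a
# separate test set, which suits such small cohorts.
clf = RandomForestClassifier(n_estimators=500, oob_score=True,
                             random_state=0).fit(X, y)
print(f"OOB accuracy: {clf.oob_score_:.2f}")

# Feature importances indicate which metabolites drive the classifier
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("top discriminating features:", top)
```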
Abstract:
The evolution of Next Generation Networks, especially wireless broadband access technologies such as Long Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX), has increased the number of "all-IP" networks across the world. The enhanced capabilities of these access networks have spearheaded the cloud computing paradigm, where end-users expect services to be accessible anytime and anywhere. Service availability is also constrained by the end-user device, where one of the major limitations is battery lifetime. It is therefore necessary to assess and minimize the energy consumed by end-user devices, given its significance for the user-perceived quality of cloud computing services. In this paper, an empirical methodology to measure the energy consumption of network interfaces is proposed. Employing this methodology, an experimental evaluation of energy consumption in three different cloud computing access scenarios (including WiMAX) was performed. The empirical results show the impact of accurate management of network interface states and of application-level network design on energy consumption. Additionally, the outcomes can be used in software-based models to optimize energy consumption and increase the Quality of Experience (QoE) perceived by end-users.
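A minimal sketch of the kind of measurement such a methodology implies: integrating sampled supply voltage and current over time, per network interface state. The trace below is invented for illustration:

```python
import numpy as np

def interface_energy(voltage_v, current_a, t_s):
    """Energy (J) drawn by a network interface, from sampled supply
    voltage and current: E = integral of V(t) * I(t) dt (trapezoidal)."""
    return np.trapz(voltage_v * current_a, t_s)

# Invented trace: 3.7 V supply sampled at 10 Hz for 60 s; the interface
# idles at 50 mA and bursts to 300 mA during a 15 s transfer.
t = np.linspace(0.0, 60.0, 601)
i = np.where((t > 10.0) & (t < 25.0), 0.300, 0.050)
v = np.full_like(t, 3.7)
print(f"energy = {interface_energy(v, i, t):.1f} J over {t[-1]:.0f} s")
```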