901 results for radius of curvature measurement


Relevance: 100.00%

Abstract:

Biological membranes are lipid bilayers that behave like two-dimensional fluids. The energy of such a fluid surface can often be described by a Hamiltonian that is invariant under reparametrizations of the surface and depends only on its geometry. Contributions from internal degrees of freedom and from the environment can be incorporated into the formalism. In this work, this approach is used to study the mechanics of fluid membranes and similar surfaces. Stresses and torques in the surface can be expressed in terms of covariant tensors. These can then be used, for example, to determine the equilibrium position of the contact line at which two adhering surfaces separate from each other. Except in the case of capillary phenomena, the surface energy depends not only on translations of the contact line but also on changes in its slope or even curvature. The resulting boundary conditions correspond to the equilibrium conditions on forces and torques if the contact line can move freely. If one of the surfaces is rigid, the variation must locally follow that surface. Stresses and torques then contribute to a single equilibrium condition; their contributions can no longer be identified individually. To make quantitative statements about the behavior of a fluid surface, its elastic properties must be known. The "nanodrum" experimental setup makes it possible to probe membrane properties locally: it consists of a pore-spanning membrane that is pushed into the pore by the tip of an atomic force microscope during the experiment. The linear behavior of the resulting force-distance curves can be reproduced with the theory developed in this work if the influence of adhesion between tip and membrane is neglected. Including this effect in the calculations changes the result considerably: force-distance curves are no longer linear, and hysteresis and nonvanishing detachment forces appear. The predictions of the calculations could be used in future experiments to determine parameters such as the bending rigidity of the membrane with nanometer resolution. Once the material properties are known, problems of membrane mechanics can be examined more closely. Surface-mediated interactions are an interesting example in this context. Using the stress tensor mentioned above, analytical expressions for the curvature-mediated force between two particles, representing proteins for instance, can be derived. In addition, the balance of forces and torques is used to derive several conditions on the membrane geometry. For the case of two infinitely long cylinders on the membrane, these conditions are combined with profile calculations to make quantitative statements about the interaction. Theory and experiment reach their limits when it comes to correctly assessing the relevance of curvature-mediated interactions in the biological cell. In such a case, computer simulations offer an alternative approach: the simulations presented here predict that proteins can aggregate and form membrane vesicles as soon as each protein induces a minimum curvature in the membrane. The radius of the vesicles depends strongly on the locally imprinted curvature. The result of the simulations is qualitatively confirmed in this work by an approximate theoretical model.
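For reference, the geometric, reparametrization-invariant Hamiltonian alluded to above is commonly written in the canonical Helfrich form (a standard choice in membrane mechanics; the thesis may employ a generalized functional):

```latex
% Helfrich energy of a fluid membrane surface \Sigma -- standard form, for
% illustration; internal degrees of freedom and environment terms can be added.
\begin{equation}
  H[\Sigma] = \int_{\Sigma} \mathrm{d}A
  \left[ \sigma + \frac{\kappa}{2} \left( 2H - C_0 \right)^{2}
         + \bar{\kappa}\, K \right]
\end{equation}
% \sigma: surface tension, \kappa: bending rigidity, H: mean curvature,
% C_0: spontaneous curvature, \bar{\kappa}: Gaussian modulus, K: Gaussian curvature.
```

Because the integrand depends only on the surface geometry, the associated stress and torque tensors are covariant, which is what permits the force and torque balances at the contact line described above.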

Relevance: 100.00%

Abstract:

Volatile organic compounds (VOCs) play a critical role in ozone formation and, together with OH radicals, drive the chemistry of the atmosphere. The simplest VOC, methane, is a climatologically important greenhouse gas that plays a key role in regulating water vapour in the stratosphere and hydroxyl radicals in the troposphere. The OH radical is the most important atmospheric oxidant, and knowledge of the atmospheric OH sink, together with the OH source and ambient OH concentrations, is essential for understanding the oxidative capacity of the atmosphere. Oceanic emission and/or uptake of methanol, acetone, acetaldehyde, isoprene and dimethyl sulphide (DMS) was characterized as a function of photosynthetically active radiation (PAR) and a suite of biological parameters in a mesocosm experiment conducted in a Norwegian fjord. High-frequency (ca. 1 min⁻¹) methane measurements were performed using a gas chromatograph with flame ionization detector (GC-FID) in the boreal forests of Finland and the tropical forests of Suriname. A new on-line method (Comparative Reactivity Method, CRM) was developed to directly measure the total OH reactivity (sink) of ambient air. It was observed that under conditions of high biological activity and a PAR of ~450 μmol photons m⁻² s⁻¹, the ocean acted as a net source of acetone. However, if either of these criteria was not fulfilled, the ocean acted as a net sink of acetone. This new insight into the biogeochemical cycling of acetone at the ocean-air interface helps to resolve discrepancies between earlier works such as Jacob et al. (2002), who reported the ocean to be a net acetone source (27 Tg yr⁻¹), and Marandino et al. (2005), who reported the ocean to be a net sink of acetone (−48 Tg yr⁻¹). The ocean acted as a net source of isoprene, DMS and acetaldehyde but a net sink of methanol. Based on these findings, it is recommended that compound-specific PAR and biological dependencies be used for estimating the influence of the global ocean on atmospheric VOC budgets. Methane was observed to accumulate within the nocturnal boundary layer, clearly indicating emissions from the forest ecosystems. There was a remarkable similarity in the time series of the boreal and tropical forest ecosystems. The averages of the median mixing ratios during a typical diel cycle were 1.83 μmol mol⁻¹ and 1.74 μmol mol⁻¹ for the boreal and tropical forest ecosystems, respectively. A flux value of (3.62 ± 0.87) × 10¹¹ molecules cm⁻² s⁻¹ (or 45.5 ± 11 Tg CH₄ yr⁻¹ for the global boreal forest area) was derived, which highlights the importance of the boreal forest ecosystem for the global methane budget (~600 Tg yr⁻¹). The newly developed CRM technique has a dynamic range of ~4 s⁻¹ to 300 s⁻¹ and an accuracy of ±25%. The system has been tested and calibrated with several single and mixed hydrocarbon standards, showing excellent linearity and agreement with the reactivity of the standards. Field tests at an urban and a forest site illustrate the promise of the new method. The results from this study have improved the current understanding of VOC emissions and uptake by ocean and forest ecosystems. Moreover, a new technique for directly measuring the total OH reactivity of ambient air has been developed and validated, which will be a valuable addition to the existing suite of atmospheric measurement techniques.
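Total OH reactivity, the quantity the CRM measures directly, is by definition the concentration-weighted sum of all OH loss processes, R_OH = Σᵢ k(OH+Xᵢ)·[Xᵢ]. A minimal sketch of this bookkeeping (rate coefficients and concentrations below are illustrative placeholders, not values from this work):

```python
# Total OH reactivity: R_OH = sum_i k_(OH+X_i) * [X_i]  (units: s^-1).
# Rate coefficients (cm^3 molecule^-1 s^-1) are order-of-magnitude
# literature-style values at ~298 K, for illustration only.
K_OH = {
    "CH4":      6.4e-15,
    "CO":       2.4e-13,
    "isoprene": 1.0e-10,
}

def total_oh_reactivity(concentrations):
    """concentrations: dict of species -> number density (molecules cm^-3)."""
    return sum(K_OH[s] * c for s, c in concentrations.items())

# Example: assumed boundary-layer number densities
n_air = 2.5e19  # molecules cm^-3 near the surface
conc = {
    "CH4":      1.8e-6 * n_air,   # ~1.8 umol mol^-1
    "CO":       100e-9 * n_air,   # ~100 nmol mol^-1
    "isoprene": 1e-9 * n_air,     # ~1 nmol mol^-1
}
print(f"R_OH = {total_oh_reactivity(conc):.2f} s^-1")
```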

Relevance: 100.00%

Abstract:

The goal of this thesis was an experimental test of an effective theory of the strong interaction at low energy, called Chiral Perturbation Theory (ChPT). Weak decays of kaons provide such a test. In particular, K± → π±γγ decays are interesting because there is no tree-level O(p²) contribution in ChPT, and the leading contributions start at O(p⁴). At this order, these decays involve one undetermined coupling constant, ĉ. Both the branching ratio and the spectrum shape of K± → π±γγ decays are sensitive to this parameter. O(p⁶) ChPT contributions to K± → π±γγ predict a 30-40% increase in the branching ratio. From the measurement of the branching ratio and spectrum shape of K± → π±γγ decays, it is possible to determine a model-dependent value of ĉ and also to examine whether the O(p⁶) corrections are necessary and sufficient to explain the rate. About 40% of the data collected in the year 2003 by the NA48/2 experiment have been analyzed, and 908 K± → π±γγ candidates with about 8% background contamination have been selected in the region z = m²γγ/m²K ≥ 0.2. Using 5,750,121 selected K± → π±π⁰ decays as the normalization channel, a model-independent differential branching ratio of K± → π±γγ has been measured to be BR(K± → π±γγ, z ≥ 0.2) = (1.018 ± 0.038 (stat) ± 0.039 (syst) ± 0.004 (ext)) × 10⁻⁶. From a fit of the O(p⁶) ChPT prediction to the measured branching ratio and the shape of the z-spectrum, a value of ĉ = 1.54 ± 0.15 (stat) ± 0.18 (syst) has been extracted. Using the measured ĉ value and the O(p⁶) ChPT prediction, the branching ratio for z = m²γγ/m²K < 0.2 was computed and added to the measured result. The value obtained for the total branching ratio is BR(K± → π±γγ) = (1.055 ± 0.038 (stat) ± 0.039 (syst) ± 0.004 (ext) +0.003/−0.002 (ĉ)) × 10⁻⁶, where the last error reflects the uncertainty on ĉ. The branching ratio result presented here agrees with previous experimental results, improving the precision of the measurement by at least a factor of five. The precision on the ĉ measurement has been improved by approximately a factor of three. A slight disagreement with the O(p⁶) ChPT branching ratio prediction as a function of ĉ has been observed. This might be due to the possible existence of non-negligible terms not yet included in the theory. Within the scope of this thesis, η-η′ mixing effects in O(p⁴) ChPT have also been measured.
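For context, the model-independent branching ratio quoted above is obtained by the standard normalization to the K± → π±π⁰ channel; schematically (a generic form with acceptance and efficiency corrections, which the abstract does not spell out):

```latex
% Schematic normalization of the signal branching ratio to K -> pi pi0;
% A and epsilon denote acceptances and efficiencies taken from simulation.
\begin{equation}
  \mathrm{BR}(K^{\pm}\to\pi^{\pm}\gamma\gamma,\ z \ge 0.2) =
  \frac{N_{\pi\gamma\gamma} - N_{\mathrm{bkg}}}{N_{\pi\pi^{0}}}\,
  \frac{A_{\pi\pi^{0}}\,\varepsilon_{\pi\pi^{0}}}
       {A_{\pi\gamma\gamma}\,\varepsilon_{\pi\gamma\gamma}}\,
  \mathrm{BR}(K^{\pm}\to\pi^{\pm}\pi^{0})
\end{equation}
```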

Relevance: 100.00%

Abstract:

In this thesis the measurement of the effective weak mixing angle sin²θeff in proton-proton collisions is described. The results are extracted from the forward-backward asymmetry (AFB) in electron-positron final states at the ATLAS experiment at the LHC. The AFB is defined from the distribution of the polar angle between the incoming quark and the outgoing lepton. The signal process used in this study is the reaction pp → Z/γ* + X → e⁺e⁻ + X, taking a total integrated luminosity of 4.8 fb⁻¹ of data into account. The data were recorded at a proton-proton center-of-mass energy of √s = 7 TeV. The weak mixing angle is a central parameter of the electroweak theory of the Standard Model (SM) and relates the neutral-current interactions of the electromagnetic and weak forces. Higher-order corrections to sin²θeff are related to other SM parameters such as the mass of the Higgs boson.

Because the initial state of the colliding protons is symmetric, there is no preferred forward or backward direction in the experimental setup. The reference axis used in the definition of the polar angle is therefore chosen along the longitudinal boost of the electron-positron final state. As a consequence, events at low absolute rapidity have a higher chance of being assigned a direction opposite to that of the incoming quark. This effect, called dilution, is reduced when events at higher rapidities are used, which can be achieved by including electrons and positrons detected in the forward regions of the ATLAS calorimeters. (Electrons and positrons are both referred to as electrons in the following.) To include the electrons from the forward region, the energy calibration of the forward calorimeters had to be redone. This calibration is performed by inter-calibrating the forward electron energy scale using pairs of one central and one forward electron together with the previously derived central electron energy calibration. Its uncertainty is shown to be dominated by the systematic variations.

The extraction of sin²θeff is performed using χ² tests, comparing the measured AFB distribution in data to a set of template distributions with varied values of sin²θeff. The templates are built with a forward-folding technique using modified generator-level samples and the official signal sample with full detector simulation and particle reconstruction and identification. The analysis is performed in two different channels: pairs of central electrons, or one central and one forward electron. The results of the two channels are in good agreement and constitute the first measurements of sin²θeff at the Z resonance using electron final states in proton-proton collisions at √s = 7 TeV. The precision of the measurement is already systematically limited, mostly by the uncertainties arising from the knowledge of the parton distribution functions (PDFs) and by the systematic uncertainties of the energy calibration.

The extracted results are combined and yield a value of sin²θeff = 0.2288 ± 0.0004 (stat.) ± 0.0009 (syst.) = 0.2288 ± 0.0010 (tot.). The measurement is compared to the results of previous measurements at the Z boson resonance. The deviation with respect to the combined result provided by the LEP and SLC experiments is up to 2.7 standard deviations.
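The asymmetry at the heart of the template fit is a simple counting asymmetry in the sign of cos θ*, and the extraction minimizes a χ² over templates; a schematic sketch (not the analysis code, with the polar angle assumed to be taken in a Collins-Soper-style convention whose sign is fixed by the dilepton boost):

```python
import numpy as np

def afb(cos_theta):
    """Forward-backward asymmetry from an array of cos(theta*) values."""
    n_f = np.sum(cos_theta > 0)  # forward events
    n_b = np.sum(cos_theta < 0)  # backward events
    return (n_f - n_b) / (n_f + n_b)

def chi2_scan(afb_data, afb_templates, sigma):
    """chi^2 of the measured AFB values (one per mass bin) against each
    template; the best-fit weak mixing angle corresponds to the template
    minimizing the chi^2."""
    return [np.sum(((afb_data - t) / sigma) ** 2) for t in afb_templates]
```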

Relevance: 100.00%

Abstract:

The top quark mass is measured here using data collected with the CMS experiment in proton-proton collisions at the LHC, at a center-of-mass energy of 8 TeV. The dataset corresponds to an integrated luminosity of 18.2 fb⁻¹. The measurement is carried out on events with six or more jets, at least two of which are b-tagged (i.e., identified as originating from the hadronization of bottom quarks). The measured top quark mass is (173.95 ± 0.43 (stat)) GeV/c², in agreement with the recently published world average.

Relevance: 100.00%

Abstract:

The present study assessed the effects of abrasion, salivary proteins, and measurement angle on the quantification of early dental erosion by analysis of reflection intensities from enamel. Enamel from 184 caries-free human molars was eroded in vitro in citric acid (pH 3.6). Abrasion of the eroded enamel resulted in a 6% to 14% increase in specular reflection intensity compared with enamel that was only eroded, the increase depending on the degree of erosion. Nevertheless, monitoring of early erosion by reflection analysis was possible even in abraded eroded teeth. The presence of a salivary pellicle induced up to 22% higher reflection intensities, owing to smoothing of the eroded enamel by the adhered proteins. However, this measurement artifact could be significantly reduced (p < 0.05) by removing the pellicle layer with 3% NaOCl solution. Changing the measurement angle from 45 to 60 deg did not improve the sensitivity of the analysis at late erosion stages. The applicability of the method for monitoring the remineralization of eroded enamel remained unclear in a demineralization/remineralization cycling model of early dental erosion in vitro.

Relevance: 100.00%

Abstract:

In this report we investigate the effect of negative energy density in a classic Friedmann cosmology. Although such components have never been measured and may be unphysical, we explore the evolution of a Universe containing a significant cosmological abundance of any of a number of hypothetical stable negative energy components. These negative energy (Ω < 0) forms include negative phantom energy (w < −1), negative cosmological constant (w = −1), negative domain walls (w = −2/3), negative cosmic strings (w = −1/3), negative mass (w = 0), negative radiation (w = 1/3), and negative ultra-light (w > 1/3). Assuming that such universe components generate pressure as perfect fluids, the attractive or repulsive nature of each negative energy component is reviewed. The Friedmann equations can only be balanced when negative energies are coupled to a greater magnitude of positive energy or to positive curvature, and minimal cases of both are reviewed. The future and fate of such universes in terms of curvature, temperature, acceleration, and energy density are reviewed, including endings categorized as a Big Crunch, Big Void, or Big Rip, and further qualified as "Warped", "Curved", or "Flat"; "Hot" versus "Cold"; "Accelerating" versus "Decelerating" versus "Coasting". A universe that ends by contracting to zero energy density is termed a Big Poof. Which contracting universes "bounce" into expansion and which expanding universes "turn over" into contraction are also reviewed. The names given to these endings of the Universe are our own nomenclature.
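The balance condition mentioned above follows from the standard Friedmann relations, with each component diluting according to its equation-of-state parameter wᵢ:

```latex
% Friedmann equation with multiple perfect-fluid components; each component
% scales with the scale factor a according to its equation of state w_i.
\begin{align}
  H^{2} \equiv \left(\frac{\dot{a}}{a}\right)^{2}
      &= \frac{8\pi G}{3}\sum_{i}\rho_{i} - \frac{k c^{2}}{a^{2}}, &
  \rho_{i}(a) &= \rho_{i,0}\, a^{-3(1+w_{i})}.
\end{align}
% For H^2 to remain non-negative, a negative rho_i must be compensated by
% positive energy densities of larger magnitude or by the curvature term.
```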

Relevance: 100.00%

Abstract:

Non-uniformity of steps within a flight is a major risk factor for falls. Guidelines and requirements for uniformity of step risers and tread depths assume that the measurement system provides precise dimensional values. The state-of-the-art measurement system is a relatively new method known as the nosing-to-nosing method. It involves measuring the distance between the noses of adjacent steps and the angle this line forms with the horizontal; from these measurements, the effective riser height and tread depth are calculated (see the sketch below). This study was undertaken to evaluate the measurement system and determine how much of the total measurement variability comes from actual step variation versus the repeatability and reproducibility (R&R) associated with the measurers. Using an experimental design that quality control professionals call a measurement system experiment, two measurers measured all steps in six randomly selected flights and repeated the process on a subsequent day. After marking each step in a flight in three lateral places (left, center, and right), the measurers took their measurements. This process yielded 774 values of riser height and 672 values of tread depth. Applying the Gage R&R ANOVA procedure in Minitab software indicated that the R&R contribution to riser height variability was 1.42% and to tread depth variability 0.50%; all remaining variability was attributed to actual step-to-step differences. These results may be compared with guidelines used in the automobile industry for measurement systems, which consider an R&R contribution below 1% an acceptable measurement system, and an R&R between 1% and 9% acceptable depending on the application, the cost of the measuring device, the cost of repair, or other factors.
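The conversion from the raw nosing-to-nosing measurement to effective dimensions is plain trigonometry; a minimal sketch (function and variable names are ours, and the example values are illustrative):

```python
import math

def effective_step_dimensions(d_mm, angle_deg):
    """Convert a nosing-to-nosing measurement into effective dimensions.

    d_mm: straight-line distance between adjacent step noses (mm)
    angle_deg: angle of that line with the horizontal (degrees)
    Returns (effective_riser, effective_tread) in mm.
    """
    theta = math.radians(angle_deg)
    riser = d_mm * math.sin(theta)   # vertical component
    tread = d_mm * math.cos(theta)   # horizontal component
    return riser, tread

# Example: 310 mm nose-to-nose at 32 degrees (illustrative values)
riser, tread = effective_step_dimensions(310.0, 32.0)
print(f"riser = {riser:.1f} mm, tread = {tread:.1f} mm")
```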

Relevance: 100.00%

Abstract:

Radon plays an important role for human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected.
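The measurement-based estimate is essentially a population-weighted average of floor-corrected municipal means; a minimal sketch, assuming the floor correction has already been applied (field names and numbers are illustrative, not Swiss data):

```python
def population_weighted_radon(municipalities):
    """municipalities: list of dicts with keys
    'mean_radon' (Bq/m^3, floor-corrected) and 'population'."""
    total_pop = sum(m["population"] for m in municipalities)
    weighted = sum(m["mean_radon"] * m["population"] for m in municipalities)
    return weighted / total_pop

# Illustrative input:
example = [
    {"mean_radon": 60.0,  "population": 400_000},
    {"mean_radon": 120.0, "population": 50_000},
]
print(f"{population_weighted_radon(example):.1f} Bq/m^3")  # 66.7
```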

Relevance: 100.00%

Abstract:

PURPOSE The purpose of this study was to identify morphologic factors affecting type I endoleak formation and bird-beak configuration after thoracic endovascular aortic repair (TEVAR). METHODS Computed tomography (CT) data of 57 patients (40 males; median age, 66 years) undergoing TEVAR for thoracic aortic aneurysm (34 TAA, 19 TAAA) or penetrating aortic ulcer (n = 4) between 2001 and 2010 were retrospectively reviewed. In 28 patients, the Gore TAG® stent-graft was used, followed by the Medtronic Valiant® in 16 cases, the Medtronic Talent® in 8, and the Cook Zenith® in 5 cases. The proximal landing zone (PLZ) was in zone 1 in 13, zone 2 in 13, zone 3 in 23, and zone 4 in 8 patients. In 14 patients (25%), the procedure was urgent or emergent. In each case, pre- and postoperative CT angiography was analyzed using a dedicated image-processing workstation and complementary in-house software based on a 3D cylindrical intensity model to calculate aortic arch angulation and conicity of the landing zones (LZ). RESULTS The primary type Ia endoleak rate was 12% (7/57), and the subsequent re-intervention rate was 86% (6/7). Left subclavian artery (LSA) coverage (p = 0.036) and conicity of the PLZ (5.9 vs. 2.6 mm; p = 0.016) were significantly associated with an increased type Ia endoleak rate. Bird-beak configuration was observed in 16 patients (28%) and was associated with a smaller radius of the aortic arch curvature (42 vs. 65 mm; p = 0.049). Type Ia endoleak was not associated with a bird-beak configuration (p = 0.388). The primary type Ib endoleak rate was 7% (4/57), and the subsequent re-intervention rate was 100%. Conicity of the distal LZ was associated with an increased type Ib endoleak rate (8.3 vs. 2.6 mm; p = 0.038). CONCLUSIONS CT-based 3D aortic morphometry helps to identify risk factors for type I endoleak formation and bird-beak configuration during TEVAR. These factors were LSA coverage and conicity within the landing zones for type I endoleak formation, and steep aortic angulation for bird-beak configuration.

Relevance: 100.00%

Abstract:

Measuring the ratio of heterophils to lymphocytes (H/L) in response to different stressors is a standard tool for assessing long-term stress in laying hens, but detailed information on the reliability of the measurements, on measurement techniques and methods, and on absolute cell counts is often lacking. Laying hens offered nest boxes at different positions were compared at different ages in a two-treatment crossover experiment to provide detailed information on the measurement procedure and on the difficulties in interpreting H/L ratios under commercial conditions. H/L ratios were pen-specific and depended on age and aviary system. There was no effect of nest position. Heterophil and lymphocyte counts were not correlated within individuals. Individuals differed in their absolute numbers of heterophils and lymphocytes and in H/L ratios, whereas absolute total leucocyte counts were similar between individuals. The reliability of the method using relative cell counts was good, yielding a correlation coefficient between double counts of r > 0.9. It was concluded that population-based reference values may not be sensitive enough to detect individual stress reactions, that the H/L ratio may not be useful as an indicator of stress under commercial conditions because of confounding factors, and that other, non-invasive measurements should be adopted.

Relevance: 100.00%

Abstract:

Upward propagation of a premixed flame in a vertical tube filled with a very lean mixture is simulated numerically using a single irreversible Arrhenius reaction model with infinitely high activation energy. In the absence of heat losses and preferential diffusion effects, a curved flame with stationary shape and velocity close to those of an open bubble ascending in the same tube is found for values of the fuel mass fraction above a certain minimum that increases with the radius of the tube, while the numerical computations cease to converge to a stationary solution below this minimum mass fraction. The vortical flow of the gas behind the flame and in its transport region is described for tubes of different radii. It is argued that this flow may become unstable when the fuel mass fraction is decreased, and that this instability, together with the flame stretch due to the strong curvature of the flame tip in narrow tubes, may be responsible for the minimum fuel mass fraction. Radiation losses and a Lewis number of the fuel slightly above unity decrease the final combustion temperature at the flame tip and increase the minimum fuel mass fraction, while a Lewis number slightly below unity has the opposite effect.
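The combustion model named above is the standard one-step irreversible Arrhenius scheme; written generically (the abstract does not give the equations), the fuel consumption rate per unit volume is

```latex
% One-step irreversible Arrhenius kinetics; the infinite-activation-energy
% limit confines the reaction to a thin sheet (activation-energy asymptotics).
\begin{equation}
  \omega = B\,\rho\, Y \exp\!\left( -\frac{E_{a}}{R T} \right),
  \qquad \frac{E_{a}}{R T} \to \infty ,
\end{equation}
% Y: fuel mass fraction, B: pre-exponential factor, E_a: activation energy.
```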

Relevance: 100.00%

Abstract:

A new method for measuring the linewidth enhancement factor (α-parameter) of semiconductor lasers is proposed and discussed. The method itself provides an estimate of the measurement error, thus self-validating the entire procedure. The α-parameter is obtained from the temporal profile and the instantaneous frequency (chirp) of the pulses generated by gain switching. The time-resolved chirp is measured with a polarization-based optical differentiator. The accuracy of the obtained values of the α-parameter is estimated from the comparison between the directly measured pulse spectrum and the spectrum reconstructed from the chirp and the temporal profile of the pulse. The method is applied to a VCSEL and to a DFB laser emitting around 1550 nm at different temperatures, obtaining a measurement error lower than ±8%.
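A common route from the measured quantities to α uses the transient-chirp relation Δν(t) ≈ (α/4π)·d ln P/dt (sign conventions vary and the adiabatic chirp term is neglected here); a schematic least-squares estimate under those assumptions, not the self-validating procedure of the paper:

```python
import numpy as np

def estimate_alpha(t, power, chirp_hz):
    """Schematic alpha-parameter estimate from pulse power P(t) and the
    instantaneous frequency deviation (chirp, Hz), assuming transient
    chirp dominates:  delta_nu = (alpha / (4*pi)) * d(ln P)/dt."""
    dlnp_dt = np.gradient(np.log(power), t)
    chirp = np.asarray(chirp_hz, dtype=float)
    # Least-squares slope (through the origin) of 4*pi*chirp vs. d(ln P)/dt
    return np.sum(4 * np.pi * chirp * dlnp_dt) / np.sum(dlnp_dt ** 2)
```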

Relevance: 100.00%

Abstract:

Verifying compliance with a design specification in manufacturing requires metrological instruments to check whether the magnitude associated with the specification falls within the tolerance range. Such instrumentation, and its use during the measurement process, carries a measurement uncertainty whose value must be related to the tolerance being verified. Most papers dealing jointly with tolerances and measurement uncertainties focus on establishing an uncertainty-tolerance relationship without paying much attention to the impact from the standpoint of process cost. This paper analyzes the cost of measurement uncertainty, treating uncertainty as a productive factor in the process outcome. Starting from a cost-tolerance model associated with the process, the presence of measurement uncertainty is expressed in quantitative cost terms and its impact on the process is analyzed.
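One simple way to make uncertainty's cost impact quantitative (our illustration; the paper's specific cost-tolerance model is not given in the abstract) is to evaluate a generic exponential cost-tolerance curve at the tolerance remaining after guard-banding for measurement uncertainty:

```python
import math

def manufacturing_cost(tolerance, a=1.0, b=5.0, k=20.0):
    """Generic cost-tolerance model C(T) = a + b * exp(-k * T).
    Coefficients a, b, k are illustrative and would be fitted per process."""
    return a + b * math.exp(-k * tolerance)

def cost_with_uncertainty(tolerance, uncertainty):
    """Guard-banding: conformance must be proven within T - 2U, so the
    process must hold a tighter effective tolerance, raising its cost."""
    effective_tol = tolerance - 2 * uncertainty
    if effective_tol <= 0:
        raise ValueError("uncertainty consumes the whole tolerance")
    return manufacturing_cost(effective_tol)

# Example: T = 0.10 mm with U = 0.01 mm versus a near-negligible U
print(cost_with_uncertainty(0.10, 0.010))  # higher cost
print(cost_with_uncertainty(0.10, 0.001))  # close to C(T)
```

The difference between the two evaluations is precisely the cost attributable to measurement uncertainty, which is the quantity the paper proposes to analyze.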