984 results for physical parameters


Relevance: 60.00%

Publisher:

Abstract:

The Factorization Method localizes inclusions inside a body from measurements on its surface. Without a priori knowledge of the physical parameters inside the inclusions, the points belonging to them can be characterized using the range of an auxiliary operator. The method relies on a range characterization that relates the range of the auxiliary operator to the measurements and that is known only for very particular applications. In this work we develop a general framework for the method by considering symmetric and coercive operators between abstract Hilbert spaces. We show that the important range characterization holds if the difference between the inclusions and the background medium satisfies a coerciveness condition, which can immediately be translated into a condition on the coefficients of a given real elliptic problem. We demonstrate how several known applications of the Factorization Method are covered by our general results and deduce the range characterization for a new example in linear elasticity.
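As a purely illustrative sketch (generic notation, not that of the cited work), the range characterization at the heart of the Factorization Method can be stated as follows: for a measurement operator Λ, a reference operator Λ₀ for the inclusion-free background, and a test function φ_z associated with a point z,

\[
z \in D \quad\Longleftrightarrow\quad \varphi_z \in \mathcal{R}\!\left( \lvert \Lambda - \Lambda_0 \rvert^{1/2} \right),
\]

so membership of z in the inclusion set D is decided purely by whether φ_z lies in the range of the square root of the (symmetric, coercive) difference of the measurement operators.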

Relevance: 60.00%

Publisher:

Abstract:

A year of satellite-borne lidar CALIOP data is analyzed and statistics on the occurrence and distribution of bulk properties of cirrus clouds are provided. The relationship between environmental and cloud physical parameters and the shape of the backscatter profile (BSP) is investigated. It is found that the CALIOP BSP is mainly affected by cloud geometrical thickness, while only minor impacts can be attributed to other quantities such as optical depth or temperature. Polynomial functions are provided to fit mean BSPs as functions of geometrical thickness and position within the cloud layer. It is demonstrated that, under realistic hypotheses, the mean BSP is linearly proportional to the IWC profile. The IWC parameterization is included in the RT-RET retrieval algorithm, which is used to analyze infrared radiance measurements in the presence of cirrus clouds during the ECOWAR field campaign. Retrieved microphysical and optical properties of the observed cloud are used as input parameters in a forward RT simulation run over the 100-1100 cm⁻¹ spectral interval and compared with interferometric data to test the ability of the current database of ice crystal single scattering properties to reproduce realistic optical features. Finally, a global scale investigation of cirrus clouds is performed by developing a collocation algorithm that exploits satellite data from multiple sensors (AIRS, CALIOP, MODIS). The resulting data set is utilized to test a new infrared hyperspectral retrieval algorithm. Retrieval products are compared to data, and in particular the cloud top height (CTH) product is considered for this purpose. Better agreement of the retrieval with the CALIOP CTH than with the MODIS CTH is found, even if some cases of underestimation and overestimation are observed.
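A minimal sketch of the polynomial-fit step described above, with made-up arrays standing in for the averaged CALIOP profiles (the fourth-order polynomial is an assumption, not the order used in the cited work):

    # Minimal sketch (not the authors' code): fit a mean backscatter profile (BSP)
    # with a polynomial in normalized in-cloud position, for one thickness class.
    import numpy as np

    z_norm = np.linspace(0.0, 1.0, 50)            # hypothetical normalized depth within the cloud layer
    mean_bsp = np.exp(-3.0 * (z_norm - 0.4)**2)   # placeholder mean BSP for this thickness class

    coeffs = np.polyfit(z_norm, mean_bsp, deg=4)  # assumed polynomial order
    bsp_fit = np.polyval(coeffs, z_norm)

    rms = np.sqrt(np.mean((bsp_fit - mean_bsp)**2))
    print(coeffs, rms)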

Relevance: 60.00%

Publisher:

Abstract:

The analysis of tandem-repetitive DNA sequences is firmly established as a genetic typing method in the fields of phylogenetic research, kinship analysis and, above all, forensic trace analysis, where multiplex PCR analysis of short tandem repeat (STR) systems brought a breakthrough in resolving and reliably assigning biological crime-scene traces. In the sequencing of the human genome, particular attention is paid to the genetically polymorphic sequence variations in the genome, the SNPs (single nucleotide polymorphisms). Two of their properties, namely their frequent occurrence within the human genome and their comparatively low mutation rate, make them particularly well-suited tools for both forensics and population genetics.
The EU project "SNPforID", from which the present work emerged, set as its goal the establishment of new methods for the valid typing of SNPs in multiplex assays. Particular emphasis was placed on sensitivity in the analysis of trace samples and on statistical power in forensic analysis. For this purpose, 52 autosomal SNPs were selected and examined with regard to their maximum individualization power. The investigation of the first 23 selected markers constitutes the first part of this work. It comprises the establishment of the multiplex assay and of the SNaPshot™ typing method as well as their statistical evaluation. The results of this investigation form part of the subsequent study of the 52-SNP multiplex method, carried out in close collaboration with the partner laboratories.
Also within the project, and as the main goal of this dissertation, a microarray-based single-base extension assay on glass slides was established and evaluated. Starting from a limited amount of DNA, the possibility of simultaneously hybridizing as many SNP systems as possible was investigated. The SNP markers used here were selected on the basis of the preparatory work successfully carried out for the establishment of the 52-SNP multiplex.
Among the many methods for genotyping biallelic markers, this assay stands out through its parallelism and the simplicity of the experimental approach, offering considerable savings in time and cost. In the present work, the "array of arrays" principle was used to type twelve DNA samples simultaneously on one glass slide under uniform experimental conditions. Based on a total of 1419 typed alleles from 33 markers, the validation was completed with a typing success rate of 86.75%. In addition, a number of boundary conditions concerning probe and primer design, hybridization conditions, and physical parameters of the laser-induced fluorescence measurement of the signals were tested and optimized.
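As a purely hypothetical illustration of the individualization power of such a panel (the allele frequencies below are invented, not those of the SNPforID markers), the combined random match probability of independent biallelic SNPs can be estimated as follows:

    # Hypothetical sketch: combined random match probability of independent biallelic SNPs.
    # Allele frequencies are made-up; real panels use population-specific values.
    p_minor = [0.45] * 52                      # assumed minor-allele frequency for each of 52 SNPs

    match_prob = 1.0
    for p in p_minor:
        q = 1.0 - p
        # probability that two unrelated individuals share a genotype at this locus
        match_prob *= p**4 + q**4 + (2.0 * p * q)**2

    print(f"combined random match probability: {match_prob:.2e}")

With these assumed frequencies the per-locus match probability is roughly 0.38, and 52 independent loci drive the combined probability down to the order of 10⁻²², which is why such panels reach forensically useful individualization power.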

Relevance: 60.00%

Publisher:

Abstract:

The aim of this thesis is to provide a geochemical characterization of the Seehausen territory (a neighborhood) of Bremen, Germany. This territory hosts a landfill for dredged sediments coming both from the Bremerhaven harbor (North Sea) and from the Bremen harbor (directly on the river Weser). For this reason the work also focuses on possible impacts of the landfill on the groundwater (shallow and deep aquifers). The Seehausen landfill uses a dewatering technique to manage the dredged sediments: incoming sediments are placed in dewatering fields until they are completely dry (which takes almost a year). They are then randomly sampled and analyzed: if the pollutant content is acceptable, the sediments are treated with other materials and used in place of raw material for embankments, bricks, etc.; otherwise they are disposed of in the landfill. A study of the natural geology and hydrogeology of the whole area of interest was carried out, especially because the area is characterized by ancient natural salt deposits. Then, together with the Geological Survey of Bremen and the Harbor Authority of Bremen, all piezometers useful for a monitoring network around the landfill were identified. During the sampling campaign, data on the principal anions and cations, physical parameters and stable water isotopes were collected. Data analysis focused particularly on Cl, Na, SO4 and EC, because these parameters may help attribute geochemical trends either to the landfill or to a natural background. Furthermore, dataloggers were installed for a month in some piezometers, and EC, pressure, dissolved oxygen and temperature data were collected. Finally, a detailed comparison was made between current and historical data (1996-2011) and between old and current interpolation maps, in order to assess time trends in the aquifer geochemistry.
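As an illustrative screening step (not necessarily the procedure used in the thesis; the concentrations below are invented), molar Na/Cl ratios are one simple way to separate a halite-dominated signature, as expected from natural salt deposits, from other salinity sources:

    # Hypothetical sketch: screen piezometer samples by molar Na/Cl ratio.
    # A ratio near 1 is consistent with halite dissolution; other values point to
    # mixing or different salinity sources. Concentrations are made-up (mg/L).
    samples = {"P1": (1200.0, 1900.0), "P2": (150.0, 180.0)}   # name: (Na, Cl)

    MW_NA, MW_CL = 22.99, 35.45
    for name, (na_mgL, cl_mgL) in samples.items():
        ratio = (na_mgL / MW_NA) / (cl_mgL / MW_CL)            # molar Na/Cl
        tag = "close to halite dissolution (~1)" if 0.8 <= ratio <= 1.2 else "other source / mixing"
        print(name, round(ratio, 2), tag)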

Relevance: 60.00%

Publisher:

Abstract:

Nitric oxide (NO) is important for several chemical processes in the atmosphere. Together with nitrogen dioxide (NO2) it forms the family of nitrogen oxides (NOx). NOx is crucial for the production and destruction of ozone. In several reactions it catalyzes the oxidation of methane and volatile organic compounds (VOCs), and in this context it is involved in the cycling of the hydroxyl radical (OH). OH is a reactive radical, capable of oxidizing most organic species; it is therefore also called the "detergent" of the atmosphere. Nitric oxide originates from several sources: fossil fuel combustion, biomass burning, lightning and soils. Fossil fuel combustion is the largest source; the others are, depending on the reviewed literature, generally comparable to each other. The individual sources show different temporal and spatial patterns in their emission magnitude. Fossil fuel combustion is important in densely populated places, where NO from other sources is less important. In contrast, NO emissions from soils (hereafter SNOx) or biomass burning are the dominant source of NOx in remote regions. By applying an atmospheric chemistry global climate model (AC-GCM) I demonstrate that SNOx is responsible for a significant part of NOx in the atmosphere. Furthermore, it increases the O3 and OH mixing ratios substantially, leading to a ∼10% increase in the oxidizing efficiency of the atmosphere. Interestingly, through reduced O3 and OH mixing ratios in simulations without SNOx, the lifetime of NOx increases in regions with other dominating sources of NOx.
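For context (textbook photochemistry, not specific to the simulations described above), the core reactions coupling NOx to ozone and OH can be sketched as

\[
\mathrm{NO_2} + h\nu \;\rightarrow\; \mathrm{NO} + \mathrm{O(^3P)},
\qquad
\mathrm{O(^3P)} + \mathrm{O_2} + \mathrm{M} \;\rightarrow\; \mathrm{O_3} + \mathrm{M},
\]
\[
\mathrm{NO} + \mathrm{HO_2} \;\rightarrow\; \mathrm{NO_2} + \mathrm{OH},
\qquad
\mathrm{NO} + \mathrm{RO_2} \;\rightarrow\; \mathrm{NO_2} + \mathrm{RO},
\]

where the peroxy radicals (HO2, RO2) formed during the oxidation of CH4 and VOCs convert NO back to NO2 without consuming O3, so the cycle produces net ozone and recycles OH; this is the sense in which NOx catalyzes the oxidation chains.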

Relevance: 60.00%

Publisher:

Abstract:

Holding the major share of stellar mass in galaxies and being also old and passively evolving, early-type galaxies (ETGs) are the primary probes for investigating these various evolution scenarios, as well as useful means to provide insights on cosmological parameters. In this thesis work I focused specifically on ETGs and on their capability to constrain galaxy formation and evolution; in particular, the principal aims were to derive some of the ETG evolutionary parameters, such as age, metallicity and star formation history (SFH), and to study their age-redshift and mass-age relations. In order to infer galaxy physical parameters, I used the public code STARLIGHT: this program provides a best fit to the observed spectrum from a combination of many theoretical models defined in user-made libraries. The comparison between the output and input light-weighted ages shows good agreement starting from SNRs of ∼ 10, with a bias of ∼ 2.2% and a dispersion of 3%. Furthermore, metallicities and SFHs are also well reproduced. In the second part of the thesis I performed an analysis on real data, starting from Sloan Digital Sky Survey (SDSS) spectra. I found that galaxies get older with cosmic time and with increasing mass (for a fixed redshift bin); absolute light-weighted ages, moreover, turn out to be independent of the fitting parameters and the synthetic models used. Metallicities are very similar to each other and clearly consistent with the ones derived from the Lick indices. The predicted SFH indicates the presence of a double burst of star formation. Velocity dispersions and extinctions are also well constrained, following the expected behaviours. As a further step, I also fitted single SDSS spectra (with SNR ∼ 20) to verify that stacked spectra gave the same results without introducing any bias: this is an important check if one wants to apply the method at higher z, where stacked spectra are necessary to increase the SNR. Our upcoming aim is to adopt this approach also on galaxy spectra obtained from higher redshift surveys, such as BOSS (z ∼ 0.5), zCOSMOS (z ∼ 1), K20 (z ∼ 1), GMASS (z ∼ 1.5) and, eventually, Euclid (z ∼ 2). Indeed, I am currently carrying out a preliminary study to establish the applicability of the method to lower resolution, as well as higher redshift (z ∼ 2), spectra, just like the Euclid ones.
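A minimal, hypothetical sketch of the idea behind full-spectrum fitting (STARLIGHT itself is far more sophisticated, handling extinction, kinematics and masking): the observed spectrum is approximated by a non-negative combination of simple stellar population (SSP) templates, and a light-weighted age follows from the fitted weights.

    # Minimal sketch (not STARLIGHT): fit a spectrum as a non-negative combination of SSP templates.
    import numpy as np
    from scipy.optimize import nnls

    n_pix = 3000
    ages = np.array([0.5, 1.0, 3.0, 10.0])                 # hypothetical template ages in Gyr
    templates = np.abs(np.random.rand(n_pix, ages.size))   # placeholder SSP spectra (one per column)
    observed = templates @ np.array([0.1, 0.0, 0.5, 0.4])  # mock observed spectrum

    weights, _ = nnls(templates, observed)                 # non-negative least-squares fit
    light_weighted_age = np.sum(weights * ages) / np.sum(weights)
    print(weights, light_weighted_age)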

Relevance: 60.00%

Publisher:

Abstract:

This thesis focuses on the design and characterization of a novel, artificial minimal model membrane system with chosen physical parameters to mimic a nanoparticle uptake process driven exclusively by adhesion and softness of the bilayer. The realization is based on polymersomes composed of poly(dimethylsiloxane)-b-poly(2-methyloxazoline) (PDMS-b-PMOXA) and nanoscopic colloidal particles (polystyrene, silica), and on the utilization of powerful characterization techniques.
PDMS-b-PMOXA polymersomes with a radius Rh ~ 100 nm, a size polydispersity PD = 1.1 and a membrane thickness h = 16 nm were prepared using the film rehydration method. Due to their suitable mechanical properties (Young's modulus of ~17 MPa and a bending modulus of ~7·10⁻⁸ J), along with their long-term stability and modifiability, these kinds of polymersomes can be used as model membranes to study physical and physicochemical aspects of transmembrane transport of nanoparticles. A combination of photon (PCS) and fluorescence (FCS) correlation spectroscopies optimizes species selectivity, necessary for a unique internalization study encompassing two main efforts.
For the proof of concept, the first effort focused on the interaction of nanoparticles (Rh NP SiO2 = 14 nm, Rh NP PS = 16 nm; cNP = 0.1 g L⁻¹) and polymersomes (Rh P = 112 nm; cP = 0.045 g L⁻¹) with fixed size and concentration. Identification of a modified form factor of the polymersome entities, selectively seen in the PCS experiment, enabled precise monitoring and a quantitative description of the incorporation process. Combining PCS and FCS led to an estimate of the number of incorporated particles per polymersome (about 8 in the examined system) and to the development of an appropriate methodology for the kinetics and dynamics of the internalization process.
The second effort aimed at establishing the phenomenology necessary to facilitate comparison with theories. The size and concentration of the nanoparticles were chosen as the most important system variables (Rh NP = 14 - 57 nm; cNP = 0.05 - 0.2 g L⁻¹). It was revealed that the incorporation process can be controlled to a significant extent by changing the nanoparticle size and concentration. On average, 7 to 11 NPs with Rh NP = 14 nm and 3 to 6 NPs with Rh NP = 25 nm can be internalized into the present polymersomes by changing the initial nanoparticle concentration in the range 0.1 - 0.2 g L⁻¹. Rapid internalization of the particles by the polymersomes is observed only above a critical threshold particle concentration, which depends on the nanoparticle size.
With regard to possible pathways for particle uptake, cryogenic transmission electron microscopy (cryo-TEM) revealed two different incorporation mechanisms depending on the size of the involved nanoparticles: cooperative incorporation of groups of nanoparticles or incorporation of single nanoparticles. Conditions for nanoparticle uptake and controlled filling of polymersomes were presented.
In the framework of this thesis, the experimental observation of transmembrane transport of spherical PS and SiO2 NPs into polymersomes via an internalization process was reported and examined quantitatively for the first time. In summary, the work performed in this thesis may have a significant impact on the development of cell model systems and thus on an improved understanding of transmembrane transport processes. The present experimental findings help create the missing phenomenology necessary for a detailed understanding of a phenomenon with great relevance to transmembrane transport. The fact that transmembrane transport of nanoparticles can be achieved by an artificial model system without any additional stimuli has a fundamental impact on the understanding not only of the nanoparticle invagination process but also of the interaction of nanoparticles with biological as well as polymeric membranes.
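As a purely illustrative back-of-the-envelope estimate (not the data treatment of the thesis; all input values are hypothetical), the mean number of internalized nanoparticles per polymersome follows from the number concentrations of internalized particles and of vesicles:

    # Illustrative estimate: mean internalized NPs per polymersome from number concentrations.
    import math

    def number_conc(mass_conc_g_per_L, radius_m, density_kg_per_m3):
        """Particles per litre for spheres of given radius and material density."""
        mass_per_particle = density_kg_per_m3 * (4.0 / 3.0) * math.pi * radius_m**3  # kg
        return (mass_conc_g_per_L * 1e-3) / mass_per_particle                        # 1/L

    n_np = number_conc(0.1, 14e-9, 2200.0)   # silica NPs; assumed bulk density
    n_vesicle = 1.0e14                       # assumed polymersome number concentration, 1/L
    frac_internalized = 0.01                 # assumed internalized fraction (e.g. from FCS)
    print(frac_internalized * n_np / n_vesicle)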

Relevance: 60.00%

Publisher:

Abstract:

In this thesis the results of multifrequency VLBA observations of the GPS source 1944+5448 and the HFP source J0111+3906 are presented. They are compact objects smaller than about 100 pc, completely embedded in the host galaxy. The availability of multi-epoch VLBI observations spanning more than 10 years allowed us to compute the hot spot advance speed and thus obtain the kinematic age of both sources. Both radio sources are young, in agreement with the idea that they are in an early evolutionary stage. A spectral analysis of each source component, such as the lobes, the hot spots, the core and the jets, is described and compared with theoretical expectations. In addition, the physical parameters derived from the VLBA images, such as the magnetic field, the luminosity, the energy and the ambient medium density of both sources, are discussed.
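As a generic reminder (the cited analysis may differ in detail), the kinematic age follows from the measured hot-spot advance speed as

\[
t_{\mathrm{kin}} \simeq \frac{d_{\mathrm{hs}}}{v_{\mathrm{adv}}},
\]

where d_hs is the projected distance of a hot spot from the core (or half the hot-spot separation for a symmetric source) and v_adv the hot-spot advance speed, the angular quantities measured on the multi-epoch VLBA images being converted to linear sizes at the distance of the host galaxy.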

Relevance: 60.00%

Publisher:

Abstract:

The first part of this three-part review on the relevance of laboratory testing of composites and adhesives deals with approval requirements for composite materials. We compare the in vivo and in vitro literature data and discuss the relevance of in vitro analyses. The standardized ISO protocols are presented, with a focus on the evaluation of physical parameters. These tests all have a standardized protocol that describes the entire test set-up. The tests analyse flexural strength, depth of cure, susceptibility to ambient light, color stability, water sorption and solubility, and radiopacity. Some tests have a clinical correlation. A high flexural strength, for instance, decreases the risk of fractures of the marginal ridge in posterior restorations and incisal edge build-ups of restored anterior teeth. Other tests do not have a clinical correlation or the threshold values are too low, which results in an approval of materials that show inferior clinical properties (e.g., radiopacity). It is advantageous to know the test set-ups and the ideal threshold values to correctly interpret the material data. Overall, however, laboratory assessment alone cannot ensure the clinical success of a product.
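As an example of how such a test value is obtained (standard three-point-bending mechanics; the ISO protocol fixes the specimen dimensions, span and loading rate), the flexural strength is computed from the load at fracture as

\[
\sigma_f = \frac{3\,F\,l}{2\,b\,h^{2}},
\]

where F is the fracture load, l the distance between the supports, and b and h the width and height of the specimen.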

Relevance: 60.00%

Publisher:

Abstract:

Microstructures and textures of calcite mylonites from the Morcles nappe large-scale shear zone in southwestern Switzerland develop principally as a function of 1) extrinsic physical parameters, including temperature, stress, strain and strain rate, and 2) intrinsic parameters, such as mineral composition. We collected rock samples at a single location from this shear zone, on which laboratory ultrasonic velocities, texture and microstructures were investigated and quantified. The samples had different concentrations of secondary mineral phases (< 5 up to 40 vol.%). Measured seismic P wave anisotropy ranges from 6.5% for polyphase mylonites (~ 40 vol.%) to 18.4% in mylonites with < 5 vol.% secondary phases. Texture strength of calcite is the main factor governing the seismic P wave anisotropy. Measured S wave splitting is generally highest in the foliation plane, but its origin is more difficult to explain solely by calcite texture. Additional texture measurements were made on calcite mylonites with low concentrations of secondary phases (≤ 10 vol.%) along the metamorphic gradient of the shear zone (15 km distance). A systematic increase in texture strength is observed moving from the frontal part of the shear zone (anchimetamorphism; 280 °C) to the higher temperature, basal part (greenschist facies; 350–400 °C). Calculated P wave velocities become increasingly anisotropic towards the high-strain part of the nappe, from an average of 5.8% in the frontal part to 13.2% in the root of the basal part. Secondary phases introduce additional complexity, and may act either to increase or decrease the seismic anisotropy of shear zone mylonites. In light of our findings we reinterpret the origin of some seismically reflective layers in the Grône–Zweisimmen line in southwestern Switzerland (PNR20 Swiss National Research Program). We hypothesize that the reflections originate in part from the lateral variation in the textural and microstructural arrangement of calcite mylonites in shear zones.
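For reference, the P wave anisotropy percentages quoted above follow the definition conventionally used in rock-physics studies (assumed here):

\[
A_P = 200\,\frac{V_{P,\max} - V_{P,\min}}{V_{P,\max} + V_{P,\min}}\ \%,
\]

where V_P,max and V_P,min are the maximum and minimum P wave velocities over all propagation directions.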

Relevance: 60.00%

Publisher:

Abstract:

The MDAH pencil-beam algorithm developed by Hogstrom et al. (1981) has been widely used in clinics for electron beam dose calculations in radiotherapy treatment planning. The primary objective of this research was to address several deficiencies of that algorithm and to develop an enhanced version. Two enhancements have been incorporated into the pencil-beam algorithm: one models fluence rather than planar fluence, and the other models the bremsstrahlung dose using measured beam data. Comparisons of the resulting calculated dose distributions with measured dose distributions for several test phantoms have been made. From these results it is concluded (1) that the fluence-based algorithm is more accurate for the dose calculation in an inhomogeneous slab phantom, and (2) that the fluence-based calculation provides only a limited improvement to the accuracy of the calculated dose in the region just downstream of the lateral edge of an inhomogeneity. The latter inaccuracy is believed to be primarily due to assumptions made in the pencil beam's modeling of the complex phantom or patient geometry. A pencil-beam redefinition model was then developed for the calculation of electron beam dose distributions in three dimensions. The primary aim of this redefinition model was to solve the dosimetry problem presented by deep inhomogeneities, which was the major deficiency of the enhanced version of the MDAH pencil-beam algorithm. The pencil-beam redefinition model is based on the theory of electron transport, redefining the pencil beams at each layer of the medium. The unique approach of this model is that all the physical parameters of a given pencil beam are characterized for multiple energy bins. Comparisons of the calculated dose distributions with measured dose distributions for a homogeneous water phantom and for phantoms with deep inhomogeneities have been made. From these results it is concluded that the redefinition algorithm is superior to the conventional, fluence-based, pencil-beam algorithm, especially in predicting the dose distribution downstream of a local inhomogeneity. The accuracy of this algorithm appears sufficient for clinical use, and the algorithm is structured for future expansion of the physical model if required for site-specific treatment planning problems.
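As a generic sketch of the Gaussian pencil-beam picture underlying this family of algorithms (simplified notation, not the exact formulation of the cited work), the dose from a single pencil beam incident at (x_i, y_i) and the total dose can be written as

\[
d_i(x,y,z) \;=\; D_\infty(z)\,\frac{1}{2\pi\sigma^2(z)}\,
\exp\!\left[-\frac{(x-x_i)^2+(y-y_i)^2}{2\sigma^2(z)}\right],
\qquad
D(x,y,z)=\sum_i d_i(x,y,z),
\]

where σ²(z) is the lateral spread predicted by multiple-scattering (Fermi-Eyges) theory and D_∞(z) the broad-beam central-axis depth dose; the redefinition model described above recomputes the pencil-beam parameters, binned in energy, at each layer of the medium instead of propagating them from the surface.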

Relevance: 60.00%

Publisher:

Abstract:

Growth in plants results from the interaction between genetic and signalling networks and the mechanical properties of cells and tissues. There has been a recent resurgence in research directed at understanding the mechanical aspects of growth, and their feedback on genetic regulation. This has been driven in part by the development of new micro-indentation techniques to measure the mechanical properties of plant cells in vivo. However, the interpretation of indentation experiments remains a challenge, since the measured force results from a combination of turgor pressure, cell wall stiffness, and cell and indenter geometry. In order to interpret the measurements, an accurate mechanical model of the experiment is required. Here, we used a plant cell system with a simple geometry, Nicotiana tabacum Bright Yellow-2 (BY-2) cells, to examine the sensitivity of micro-indentation to a variety of mechanical and experimental parameters. Using a finite-element mechanical model, we found that, for indentations of a few microns on turgid cells, the measurements were mostly sensitive to turgor pressure and the radius of the cell, and not to the exact indenter shape or elastic properties of the cell wall. By complementing indentation experiments with osmotic experiments to measure the elastic strain in turgid cells, we could fit the model to both turgor pressure and cell wall elasticity. This allowed us to interpret apparent stiffness values in terms of meaningful physical parameters that are relevant for morphogenesis.
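A purely illustrative sketch of the first step of such an analysis, with made-up data and an assumed linear force response (relating the fitted stiffness to turgor pressure and wall elasticity requires a mechanical model of the indentation, a finite-element model in the cited work):

    # Illustrative sketch only: extract an apparent stiffness from a force-depth curve.
    import numpy as np
    from scipy.optimize import curve_fit

    depth_um = np.linspace(0.0, 3.0, 30)                      # hypothetical indentation depths (micron)
    force_uN = 0.8 * depth_um + 0.02 * np.random.randn(30)    # mock measured forces (microNewton)

    linear = lambda d, k, f0: k * d + f0                      # assumed shallow-indentation response
    (k_app, f0), _ = curve_fit(linear, depth_um, force_uN)
    print(f"apparent stiffness: {k_app:.2f} uN/um")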

Relevance: 60.00%

Publisher:

Abstract:

A measurement of the B0s → J/ψϕ decay parameters, updated to include flavor tagging, is reported using 4.9 fb⁻¹ of integrated luminosity collected by the ATLAS detector from √s = 7 TeV pp collisions recorded in 2011 at the LHC. The values measured for the physical parameters are:
ϕs = 0.12 ± 0.25 (stat) ± 0.05 (syst) rad
ΔΓs = 0.053 ± 0.021 (stat) ± 0.010 (syst) ps⁻¹
Γs = 0.677 ± 0.007 (stat) ± 0.004 (syst) ps⁻¹
|A∥(0)|² = 0.220 ± 0.008 (stat) ± 0.009 (syst)
|A0(0)|² = 0.529 ± 0.006 (stat) ± 0.012 (syst)
δ⊥ = 3.89 ± 0.47 (stat) ± 0.11 (syst) rad
where the parameter ΔΓs is constrained to be positive. The S-wave contribution was measured and found to be compatible with zero. Results for ϕs and ΔΓs are also presented as 68% and 95% likelihood contours, which show agreement with the Standard Model expectations.

Relevance: 60.00%

Publisher:

Abstract:

The European Rosetta mission, on its way to comet 67P/Churyumov-Gerasimenko, will remain for more than a year in the close vicinity (1 km) of the comet. The two ROSINA mass spectrometers on board Rosetta are designed to analyze the neutral and ionized volatile components of the cometary coma. However, the relative velocity between the comet and the spacecraft will be minimal, and the velocity of the outgassing particles is below 1 km/s. This combination leads to very low ion energies in the plasma surrounding the comet, typically below 20 eV. Additionally, the spacecraft may charge up to a few volts in this environment. In order to simulate such a plasma and to calibrate the mass spectrometers, a source of ions with very low energies had to be developed for use in the laboratory together with the different gases expected at the comet. In this paper we present the design of this ion source and discuss the physical parameters of the ion beam, such as sensitivity, energy distribution and beam shape. Finally, we show the first ion measurements that have been performed together with one of the two mass spectrometers.
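A small, illustrative estimate of why the relevant ion energies are so low (round numbers only, not instrument specifications): the kinetic energy of a slow cometary ion at the outgassing speed is well below 1 eV, and a few volts of spacecraft potential shift it only modestly.

    # Illustrative estimate: energy of a slow cometary ion plus assumed spacecraft potential.
    E_CHARGE, AMU = 1.602e-19, 1.661e-27   # elementary charge (C), atomic mass unit (kg)

    mass_amu, v_ms = 28.0, 1000.0          # e.g. a CO+ ion at ~1 km/s outgassing speed
    e_kin_eV = 0.5 * mass_amu * AMU * v_ms**2 / E_CHARGE
    e_total_eV = e_kin_eV + 5.0            # plus a few volts of assumed spacecraft potential
    print(round(e_kin_eV, 2), round(e_total_eV, 2))   # ~0.15 eV kinetic, a few eV total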

Relevance: 60.00%

Publisher:

Abstract:

Context. During September and October 2014, the OSIRIS cameras onboard the ESA Rosetta mission detected millions of single particles. Many of these dust particles appear as long tracks (due to both the dust proper motion and the spacecraft motion during the exposure time) with a clear brightness periodicity. Aims. We interpret the observed periodic features as a rotational and translational motion of aspherical dust grains. Methods. By counting the peaks of each track, we obtained statistics of the rotation frequency. We compared these results with the rotation frequency predicted by a model of aspherical dust grain dynamics in a model gas flow. By testing many possible sets of physical conditions and grain characteristics, we constrained the rotational properties of the dust grains. Results. We analyzed, qualitatively and quantitatively, the motion of rotating aspherical dust grains with different cross sections in flow conditions corresponding to the coma of 67P/Churyumov-Gerasimenko. Based on the OSIRIS observations, we constrain the possible physical parameters of the grains.
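An illustrative sketch of the peak-counting step (hypothetical numbers; the count of brightness maxima per rotation depends on the grain shape, and two maxima per rotation is assumed here for an elongated grain):

    # Illustrative sketch: rotation frequency from the brightness periodicity of a dust track.
    def rotation_frequency(n_peaks, exposure_s, peaks_per_rotation=2):
        """Frequency in Hz, assuming a fixed number of brightness maxima per full rotation."""
        return n_peaks / (peaks_per_rotation * exposure_s)

    print(rotation_frequency(n_peaks=12, exposure_s=12.5))   # hypothetical track: ~0.5 Hz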