964 results for Dispersion curves


Relevance: 20.00%

Publisher:

Abstract:

Aerosol particles are likely important contributors to our future climate. Further, during recent years, effects on human health arising from emissions of particulate material have gained increasing attention. In order to quantify the effect of aerosols on both climate and human health, we need to better quantify the interplay between sources and sinks of aerosol particle number and mass on large spatial scales. So far, long-term regional observations of aerosol properties have been scarce, although they are considered necessary to advance our knowledge of the regional and global distribution of aerosols. In this context, regional studies of aerosol properties and aerosol dynamics are truly important areas of investigation. This thesis is devoted to investigations of aerosol number size distribution observations performed over the course of one year, encompassing observational data from five stations covering an area from the southern parts of Sweden up to the northern parts of Finland. The thesis aims to describe aerosol size distribution dynamics from both a quantitative and a qualitative point of view, focusing on properties and changes in the aerosol size distribution as a function of location, season, source area, transport pathways and links to various meteorological conditions. The investigations performed in this thesis show that, although the basic behaviour of the aerosol number size distribution in terms of seasonal and diurnal characteristics is similar at all stations in the measurement network, the aerosol over the Nordic countries is characterised by a typically sharp gradient in aerosol number and mass. This gradient is argued to derive from the geographical locations of the stations in relation to the dominant sources and transport pathways. It is clear that the source area significantly determines the aerosol size distribution properties, but it is also evident that transport conditions, in terms of frequency of precipitation and cloudiness, in some cases even more strongly control the evolution of the number size distribution. Aerosol dynamic processes during clear-sky transport are likewise argued to be highly important. Southerly transport of marine air and northerly transport of air from continental sources are studied in detail under clear-sky conditions by performing a pseudo-Lagrangian box-model evaluation of the two type cases. Results from both modelling and observations suggest that nucleation events contribute to an increase in integral number during southerly transport of comparably clean marine air, while number depletion dominates the evolution of the size distribution during northerly transport. This difference is largely explained by the different concentrations of pre-existing aerosol surface associated with the two type cases. Mass is found to accumulate in many of the individual transport cases studied; this mass increase is argued to be controlled by emissions of organic compounds from the boreal forest, which puts the boreal forest in a central position for estimates of aerosol forcing on a regional scale.
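The pseudo-Lagrangian box-model evaluation is of course far richer than a snippet, but the competition between a nucleation source and a coagulation-type sink that drives the integral number evolution can be caricatured in a few lines. All rate parameters below are hypothetical, chosen only for illustration:

```python
def box_number(N0, J, K, t_end, dt=1.0):
    """Forward-Euler integration of dN/dt = J - K*N**2: a nucleation
    source J (particles per unit volume and time) competing with a
    coagulation-type sink K*N**2.  All values used here are hypothetical."""
    N, t = float(N0), 0.0
    while t < t_end:
        N += dt * (J - K * N * N)
        t += dt
    return N
```

In the clean-air case (low pre-existing number, J > 0) the integral number grows toward the balance value sqrt(J/K); with nucleation suppressed and a high initial number, the sink depletes it — qualitatively the southerly/northerly contrast reported in the abstract.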

Relevance: 20.00%

Publisher:

Abstract:

We present a new approach to performing calculations with certain standard classes in the cohomology of the moduli spaces of curves. It is based on an important lemma of Ionel relating the intersection theory of the moduli space of curves to that of the space of admissible coverings. As particular results, we obtain expressions of Hurwitz numbers in terms of intersections in the tautological ring, expressions of the simplest intersection numbers in terms of Hurwitz numbers, an algorithm for calculating certain correlators that are the subject of the Witten conjecture, an improved algorithm for intersections related to the Boussinesq hierarchy, expressions for the Hodge integrals over two-pointed ramification cycles, cut-and-join type equations for a large class of intersection numbers, and more.
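For orientation, in the standard notation (not spelled out in the abstract), the correlators appearing in the Witten conjecture are the intersection numbers of psi-classes on the Deligne-Mumford compactified moduli space of curves:

```latex
\langle \tau_{d_1} \cdots \tau_{d_n} \rangle
  = \int_{\overline{\mathcal{M}}_{g,n}} \psi_1^{d_1} \cdots \psi_n^{d_n},
\qquad d_1 + \cdots + d_n = 3g - 3 + n,
```

the dimension condition fixing the genus g for which the integral is non-zero; these are the quantities the abstract's algorithms express via Hurwitz numbers.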

Relevance: 20.00%

Publisher:

Abstract:

A regional envelope curve (REC) of flood flows summarises the current bound on our experience of extreme floods in a region. RECs are available for most regions of the world. Recent scientific papers introduced a probabilistic interpretation of these curves and formulated an empirical estimator of the recurrence interval T associated with a REC, which, in principle, enables us to use RECs for design purposes in ungauged basins. The aim of this work is twofold. First, it extends the REC concept to extreme rainstorm events by introducing Depth-Duration Envelope Curves (DDECs), defined as the regional upper bound on all record rainfall depths observed to date for various rainfall durations. Second, it adapts the probabilistic interpretation proposed for RECs to DDECs and assesses the suitability of these curves for estimating the T-year rainfall event associated with a given duration, for large values of T. Probabilistic DDECs are complementary to regional frequency analysis of rainstorms, and their utilisation in combination with a suitable rainfall-runoff model can provide useful indications of the magnitude of extreme floods for gauged and ungauged basins. The study focuses on two national datasets: the peak-over-threshold (POT) series of rainfall depths with durations of 30 min and 1, 3, 9 and 24 h obtained for 700 Austrian raingauges, and the annual maximum series (AMS) of rainfall depths with durations spanning from 5 min to 24 h collected at 220 raingauges located in northern-central Italy. The estimation of the recurrence interval of a DDEC requires the quantification of the equivalent number of independent data, which, in turn, is a function of the cross-correlation among sequences. While the quantification and modelling of intersite dependence is a straightforward task for AMS series, it may be cumbersome for POT series. This paper proposes a possible approach to address this problem.
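As a toy illustration of a depth-duration envelope curve (not the authors' estimator — the record depths and the power-law form below are assumptions made for the sketch), one can relate the regional record depths to duration as h = a * d**n, fitted linearly in log-log space:

```python
import numpy as np

def fit_ddec(durations_h, record_depths_mm):
    """Fit a power law h = a * d**n through the regional record depths
    (one maximum per duration) by least squares in log-log space."""
    n, log_a = np.polyfit(np.log(durations_h), np.log(record_depths_mm), 1)
    return np.exp(log_a), n

# hypothetical record depths (mm) for durations of 0.5, 1, 3, 9 and 24 h
a, n = fit_ddec([0.5, 1.0, 3.0, 9.0, 24.0], [60.0, 80.0, 130.0, 200.0, 300.0])
```

In practice the envelope must bound all records, so the fitted line is shifted upward to the largest residual, and the recurrence interval T attached to the curve depends on the equivalent number of independent data discussed in the abstract.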

Relevance: 20.00%

Publisher:

Abstract:

In recent years, new precision experiments have become possible with the high-luminosity accelerator facilities at MAMI and JLab, supplying physicists with precision data sets for different hadronic reactions in the intermediate energy region, such as pion photo- and electroproduction and real and virtual Compton scattering. By means of the low-energy theorem (LET), the global properties of the nucleon (its mass, charge, and magnetic moment) can be separated from the effects of its internal structure, which are effectively described by polarizabilities. The polarizabilities quantify the deformation of the charge and magnetization densities inside the nucleon in an applied quasistatic electromagnetic field. The present work is dedicated to developing a tool for the extraction of the polarizabilities from these precise Compton data with minimum model dependence, making use of the detailed knowledge of pion photoproduction by means of dispersion relations (DR). Due to the presence of t-channel poles, the dispersion integrals for two of the six Compton amplitudes diverge. Therefore, we have suggested subtracting the s-channel dispersion integrals at zero photon energy ($\nu = 0$). The subtraction functions at $\nu = 0$ are calculated through DR in the momentum transfer $t$ at fixed $\nu = 0$, subtracted at $t = 0$. For this calculation, we use the information about the t-channel process $\gamma\gamma \to \pi\pi \to N\bar{N}$. In this way, four of the polarizabilities can be predicted using the unsubtracted DR in the $s$-channel. The other two, $\alpha - \beta$ and $\gamma_\pi$, are free parameters in our formalism and can be obtained from a fit to the Compton data. We present the results for unpolarized and polarized RCS observables in the kinematics of the most recent experiments, and indicate an enhanced sensitivity to the nucleon polarizabilities in the energy range between pion production threshold and the $\Delta(1232)$ resonance. Furthermore, we extend the DR formalism to virtual Compton scattering (VCS, radiative electron scattering off the nucleon), in which the concept of the polarizabilities is generalized to the case of a virtual initial photon by introducing six generalized polarizabilities (GPs). Our formalism provides predictions for the four spin GPs, while the two scalar GPs $\alpha(Q^2)$ and $\beta(Q^2)$ have to be fitted to the experimental data at each value of $Q^2$. We show that at energies between pion threshold and the $\Delta(1232)$ resonance position, the sensitivity to the GPs can be increased significantly compared to low energies, where the LEX is applicable. Our DR formalism can be used for analysing VCS experiments over a wide range of energy and virtuality $Q^2$, which allows one to extract the GPs from VCS data in different kinematics with a minimum of model dependence.
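Schematically (in generic notation — the precise amplitudes, pole terms and thresholds are defined in the thesis itself), a fixed-$t$ dispersion relation for a Compton amplitude $A_i$, subtracted at zero photon energy, takes the form

```latex
\mathrm{Re}\, A_i(\nu, t)
  = A_i^{\mathrm{pole}}(\nu, t)
  + \left[ A_i(0, t) - A_i^{\mathrm{pole}}(0, t) \right]
  + \frac{2\nu^2}{\pi}\,
    \mathcal{P}\!\int_{\nu_{\mathrm{thr}}}^{\infty}
    \frac{\mathrm{Im}_s\, A_i(\nu', t)}{\nu' \left(\nu'^2 - \nu^2\right)}\, d\nu',
```

the extra power of $\nu'$ in the denominator supplying the convergence that the unsubtracted integrals lack for two of the six amplitudes; the subtraction functions $A_i(0, t)$ are in turn obtained from a dispersion relation in $t$ fed by the $\gamma\gamma \to \pi\pi \to N\bar{N}$ process.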

Relevance: 20.00%

Publisher:

Abstract:

The research is aimed at contributing to the identification of reliable, fully predictive Computational Fluid Dynamics (CFD) methods for the numerical simulation of equipment typically adopted in the chemical and process industries. The apparatuses selected for the investigation, specifically membrane modules, stirred vessels and fluidized beds, were characterised by different and often complex fluid dynamic behaviour, and in some cases the momentum transfer phenomena were coupled with mass transfer or multiphase interactions. First, a novel modelling approach based on CFD for the prediction of the gas separation process in membrane modules for hydrogen purification is developed. The reliability of the numerically calculated gas velocity field is assessed by comparing the predictions with experimental velocity data collected by Particle Image Velocimetry, while the capability of the model to properly predict the separation process under a wide range of operating conditions is assessed through a strict comparison with permeation experimental data. Then, the effect of numerical issues on RANS-based predictions of single-phase stirred tanks is analysed. The homogenisation process of a scalar tracer is also investigated, and simulation results are compared with original passive tracer homogenisation curves determined with Planar Laser Induced Fluorescence. The capability of a CFD approach based on the solution of the RANS equations to describe the fluid dynamic characteristics of the dispersion of organics in water is also investigated. Finally, an Eulerian-Eulerian fluid dynamic model is used to simulate mono-disperse suspensions of Geldart Group A particles fluidized by a Newtonian incompressible fluid, as well as binary segregating fluidized beds of particles differing in size and density. The results obtained under a number of different operating conditions are compared with literature experimental data, and the effect of numerical uncertainties on axial segregation is also discussed.
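As a small illustration of how a homogenisation curve is reduced to a single number (a common 95% criterion; the function and data below are hypothetical, not the PLIF processing actually used in the work), the mixing time can be taken as the instant after which the normalised tracer concentration stays within +/- 5% of its final value:

```python
import numpy as np

def mixing_time(t, c, band=0.05):
    """Time after which the tracer concentration c(t) remains permanently
    within +/- band of its final value c[-1] (assumes c has settled)."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    outside = np.abs(c - c[-1]) > band * c[-1]
    if not outside.any():
        return t[0]
    return t[np.flatnonzero(outside)[-1] + 1]
```

For a tracer curve approaching its plateau exponentially, c(t) = 1 - exp(-t), this criterion gives a mixing time near ln(20), about 3 time units.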

Relevance: 20.00%

Publisher:

Abstract:

This thesis provides efficient and robust algorithms for the computation of the intersection curve between a torus and a simple surface (e.g. a plane, a natural quadric or another torus), based on algebraic and numeric methods. The algebraic part includes the classification of the topological type of the intersection curve and the detection of degenerate situations such as embedded conic sections and singularities. Moreover, reference points for each connected component of the intersection curve are determined. The required computations are realised efficiently by solving polynomials of at most degree four, and exactly by using exact arithmetic. The numeric part includes algorithms for tracing each intersection curve component, starting from the previously computed reference points. Using interval arithmetic, accidental errors such as jumping between branches or skipping parts of a curve are prevented, and the neighbourhoods of singularities are treated correctly. Our algorithms are complete in the sense that any kind of input can be handled, including degenerate and singular configurations. They are verified, since the results are topologically correct and approximate the real intersection curve to any arbitrarily given error bound. The algorithms are robust, since no human intervention is required, and they are efficient in that the treatment of algebraic equations of high degree is avoided.
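The exact algebraic machinery of the thesis cannot be compressed into a snippet, but the numeric tracing idea — predict along the tangent (the cross product of the two surface gradients), then correct back onto both surfaces with Newton steps — can be sketched. The torus radii, plane and step size below are arbitrary choices for the illustration, and finite-difference gradients stand in for the exact ones:

```python
import numpy as np

R, r = 2.0, 0.5                        # torus radii (arbitrary)

def torus(p):
    x, y, z = p
    return (np.hypot(x, y) - R) ** 2 + z ** 2 - r ** 2

def plane(p):                          # the plane z = 0.3
    return p[2] - 0.3

def grad(f, p, h=1e-6):
    """Central finite-difference gradient (stand-in for exact gradients)."""
    e = np.eye(3) * h
    return np.array([(f(p + e[i]) - f(p - e[i])) / (2 * h) for i in range(3)])

def trace_step(p, ds=0.05):
    """One predictor-corrector step along the torus/plane intersection."""
    tangent = np.cross(grad(torus, p), grad(plane, p))
    p = p + ds * tangent / np.linalg.norm(tangent)
    for _ in range(10):                # Newton correction onto both surfaces
        F = np.array([torus(p), plane(p)])
        J = np.vstack([grad(torus, p), grad(plane, p)])
        p = p - J.T @ np.linalg.solve(J @ J.T, F)
    return p
```

Starting from a point on the curve, e.g. (2.4, 0, 0.3), repeated calls march around one connected component; the interval arithmetic of the thesis is what guarantees such a march never jumps between branches.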

Relevance: 20.00%

Publisher:

Abstract:

The use of guided ultrasonic waves (GUW) has increased considerably in the fields of non-destructive evaluation (NDE) and structural health monitoring (SHM) due to their ability to perform long-range inspections, to probe hidden areas, and to provide complete monitoring of the entire waveguide. Guided waves can be fully exploited only once their dispersive properties are known for the given waveguide. In this context, well-established analytical and numerical methods are represented by the Matrix family of methods and the Semi-Analytical Finite Element (SAFE) methods. However, while the former are limited to simple geometries of finite or infinite extent, the latter can model arbitrary cross-section waveguides of finite domain only. This thesis is aimed at developing three different numerical methods for modelling wave propagation in complex translationally invariant systems. First, a classical SAFE formulation for viscoelastic waveguides is extended to account for a three-dimensional translationally invariant static prestress state. The effect of prestress, residual stress and applied loads on the dispersion properties of the guided waves is shown. Next, a two-and-a-half dimensional Boundary Element Method (2.5D BEM) for the dispersion analysis of damped guided waves in waveguides and cavities of arbitrary cross-section is proposed. The attenuation dispersion spectrum due to material damping and the geometrical spreading of cavities with arbitrary shape is shown for the first time. Finally, a coupled SAFE-2.5D BEM framework is developed to study the dispersion characteristics of waves in viscoelastic waveguides of arbitrary geometry embedded in infinite solid or liquid media. Dispersion of leaky and non-leaky guided waves in terms of speed and attenuation, as well as the radiated wavefields, can be computed. The results obtained in this thesis can be helpful for the design of both actuation and sensing systems in practical applications, as well as for tuning experimental setups.
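The SAFE and 2.5D BEM formulations are beyond a snippet, but the notion of a dispersion curve they compute can be illustrated with the textbook Euler-Bernoulli beam, whose flexural waves are dispersive (phase velocity grows with wavenumber). The material and cross-section values below are hypothetical:

```python
import numpy as np

E, rho = 210e9, 7850.0               # hypothetical steel, SI units
b, h = 0.02, 0.002                   # rectangular cross-section (m)
A, I = b * h, b * h ** 3 / 12.0      # area and second moment of area

def phase_velocity(k):
    """c_p = omega/k for flexural waves, omega = k**2 * sqrt(E*I/(rho*A))."""
    return k * np.sqrt(E * I / (rho * A))

def group_velocity(k):
    """c_g = d(omega)/dk = 2 * c_p: energy outruns the phase fronts."""
    return 2.0 * phase_velocity(k)
```

Sampling phase_velocity over a range of wavenumbers traces one dispersion curve; SAFE produces the same kind of omega(k) data, but for arbitrary cross-sections, damping and prestress.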

Relevance: 20.00%

Publisher:

Abstract:

Basic concepts and definitions relating to Lagrangian Particle Dispersion Models (LPDMs) for the description of turbulent dispersion are introduced. The study focuses on LPDMs that use, as input for the large-scale motion, fields produced by Eulerian models, with the small-scale motions described by Lagrangian Stochastic Models (LSMs). The data of two different dynamical models have been used: a Large Eddy Simulation (LES) and a General Circulation Model (GCM). After reviewing the small-scale closure adopted by the Eulerian model, the development and implementation of appropriate LSMs is outlined. The basic requirement of every LPDM used in this work is its fulfilment of the Well Mixed Condition (WMC). For the description of dispersion in the GCM domain, a stochastic model of Markov order 0, consistent with the eddy-viscosity closure of the dynamical model, is implemented. An LSM of Markov order 1, more suitable for shorter timescales, has been implemented for the description of the unresolved motion of the LES fields. Different assumptions on the small-scale correlation time are made. Tests of the LSM on GCM fields suggest that the use of an interpolation algorithm able to maintain analytical consistency between the diffusion coefficient and its derivative is mandatory if the model is to satisfy the WMC. A dynamical time-step selection scheme based on the shape of the diffusion coefficient is also introduced, and the criteria for the integration step selection are discussed. Absolute and relative dispersion experiments are performed with various unresolved-motion settings for the LSM on LES data, and the results are compared with laboratory data. The study shows that the unresolved turbulence parameterization has a negligible influence on the absolute dispersion, while it affects the contribution of relative dispersion and meandering to absolute dispersion, as well as the Lagrangian correlation.
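A minimal sketch of a Markov order-0 (random displacement) model satisfying the Well Mixed Condition, for a hypothetical bounded layer with eddy diffusivity K(z): the drift term dK/dz is exactly what the WMC demands alongside the sqrt(2K) noise. The profile and time step are invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def K(z):                       # hypothetical eddy-diffusivity profile
    return 0.1 + 0.4 * z * (1.0 - z)

def dKdz(z):
    return 0.4 * (1.0 - 2.0 * z)

def step(z, dt):
    """One Markov-0 step dz = K'(z) dt + sqrt(2 K(z) dt) * xi, with
    reflection at the layer boundaries z = 0 and z = 1."""
    z = z + dKdz(z) * dt + np.sqrt(2.0 * K(z) * dt) * rng.standard_normal(z.size)
    z = np.abs(z)                          # reflect at the bottom
    return np.where(z > 1.0, 2.0 - z, z)   # reflect at the top
```

Releasing particles uniformly and stepping this model leaves the distribution uniform (the WMC); dropping the dKdz drift makes particles pile up where K is small, which is precisely the inconsistency the condition rules out.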

Relevance: 20.00%

Publisher:

Abstract:

Despite the scientific achievements of the last decades in astrophysics and cosmology, the majority of the energy content of the Universe is still unknown. A potential solution to the "missing mass problem" is the existence of dark matter in the form of WIMPs. Due to the very small cross-section of WIMP-nucleon interactions, the number of expected events is very limited (about one event per tonne per year), thus requiring detectors with a large target mass and a low background level. The aim of the XENON1T experiment, the first tonne-scale LXe-based detector, is to be sensitive to a WIMP-nucleon cross-section as low as 10^-47 cm^2. To investigate the possibility of such a detector reaching its goal, Monte Carlo simulations are mandatory to estimate the background. To this aim, the GEANT4 toolkit has been used to implement the detector geometry and to simulate the decays from the various background sources, electromagnetic and nuclear. From the analysis of the simulations, the background level has been found to be fully acceptable for the purposes of the experiment: about 1 background event in a 2 tonne-year exposure. Using the Maximum Gap method, the XENON1T sensitivity has been evaluated, and the minimum of the WIMP-nucleon cross-section has been found at 1.87 x 10^-47 cm^2, at 90% CL, for a WIMP mass of 45 GeV/c^2. The results have been independently cross-checked using the Likelihood Ratio method, which confirmed them with agreement to within less than a factor of two; such a difference is completely acceptable considering the intrinsic differences between the two statistical methods. Thus, this PhD thesis demonstrates that the XENON1T detector will be able to reach its design sensitivity, lowering the limits on the WIMP-nucleon cross-section by about two orders of magnitude with respect to current experiments.
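The Maximum Gap method referred to is Yellin's: the probability C0(x, mu) that the largest gap between observed events is smaller than x, when mu events are expected in total, has a closed form that fits in a few lines (written here in an algebraically equivalent form so the k*x = mu boundary needs no special-casing):

```python
from math import exp, factorial, floor

def c0(x, mu):
    """Yellin's maximum-gap probability C0(x, mu).  The upper limit at
    confidence level CL is the mu solving c0(x_max, mu) = CL, where
    x_max is the largest observed gap in expected-event units."""
    total = 0.0
    for k in range(floor(mu / x) + 1):
        # identical to the usual (1 + k/(mu - k*x)) factorised form
        total += exp(-k * x) / factorial(k) * (
            (k * x - mu) ** k - k * (k * x - mu) ** (k - 1))
    return total
```

With zero observed events the maximum gap spans the whole range (x = mu) and c0(mu, mu) = 1 - exp(-mu), so the 90% CL condition reproduces the familiar Poisson value mu of about 2.30.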

Relevance: 20.00%

Publisher:

Abstract:

The aim of this PhD thesis is the investigation of the photophysical properties of materials that can be exploited in solar energy conversion. In this context, my research was mainly focused on carbon nanotube-based materials and ruthenium complexes. The first part of the thesis is devoted to carbon nanotubes (CNTs), which have unique physical and chemical properties whose rational control is of substantial interest for widening their application perspectives in many fields. Our goals were (i) to develop novel procedures for supramolecular dispersion, using amphiphilic block copolymers, and (ii) to investigate the photophysics of CNT-based multicomponent hybrids and understand the nature of the photoinduced interactions between CNTs and selected molecular systems such as porphyrins, fullerenes and oligo(p-phenylenevinylenes). We established a new protocol for the dispersion of SWCNTs in aqueous media via non-covalent interactions and demonstrated that some CNT-based hybrids are suitable for testing in PV devices. The second part of the work is focused on the study of homoleptic and heteroleptic Ru(II) complexes with bipyridine and extended phenanthroline ligands. Our studies demonstrated that these compounds are potentially useful as light-harvesting systems for solar energy conversion. Both CNT materials and Ru(II) complexes have turned out to be remarkable examples of photoactive systems. The morphological and photophysical characterization of CNT-based multicomponent systems allowed a satisfactory rationalization of the photoinduced interactions between the individual units, despite several hurdles related to the intrinsic properties of CNTs that prevent, for instance, the use of laser spectroscopic techniques. Overall, this work may prompt the design and development of new functional materials for photovoltaic devices.

Relevance: 20.00%

Publisher:

Abstract:

This thesis studies some fundamental properties of the Zeta and L functions associated with an elliptic curve. In particular, the rationality of the Zeta function and the Riemann hypothesis are proved for two specific families of elliptic curves. The problem of the existence of an analytic continuation to the complex plane of the L function of an elliptic curve with complex multiplication is then studied, through the direct analysis of two particular cases.
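For context (standard facts, not specific to the two families treated in the thesis): for an elliptic curve E over the finite field F_q, rationality means that the zeta function collapses to

```latex
Z(E/\mathbb{F}_q, T) = \frac{1 - a_q T + q T^2}{(1 - T)(1 - qT)},
\qquad a_q = q + 1 - \#E(\mathbb{F}_q),
```

and the Riemann hypothesis is the statement that the inverse roots of the numerator have absolute value sqrt(q) — equivalently, Hasse's bound |a_q| <= 2*sqrt(q).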

Relevance: 20.00%

Publisher:

Abstract:

Globular clusters are ideal laboratories for studying the dynamics of N-body systems and its effects on stellar evolution. Indeed, globular clusters are the only astrophysical systems that, within the timescale of the age of the Universe, experience almost all known stellar-dynamical processes. This thesis work is part of a long-term project aimed at providing a detailed characterization of the dynamical properties of Galactic globular clusters. In this research, tools of fundamental importance are the velocity dispersion profile of the system and its rotation curve. To determine the radial components of these kinematic profiles in Galactic globular clusters, it is necessary to measure the line-of-sight velocity of a large sample of member stars at different distances from the centre. Following a multi-instrument approach, the entire radial extent of the cluster can be sampled by using multi-object spectrographs with high spectral resolution in the intermediate/outer regions, and IFU spectrographs with adaptive optics for the central regions (a few arcseconds from the centre). This thesis work aims to determine the velocity dispersion profile of the globular cluster 47 Tucanae, sampling a radial extent between about 20'' and 13' from the centre. For this purpose, the radial velocities of about one thousand stars in the direction of 47 Tucanae were measured, using high-resolution spectra obtained with the multi-object spectrograph FLAMES mounted on the ESO Very Large Telescope. The radial velocities were measured using the technique of cross-correlation between the observed spectra and appropriate synthetic spectra, reaching accuracies better than 0.5 km/s. The sample thus obtained (complementary to that collected with IFU instruments in the central regions) is fundamental for building the velocity dispersion profile of the cluster and its possible rotation curve. These data, combined with the previously determined density profile of the cluster, will make it possible to properly constrain theoretical models such as those of King (1966) or Wilson (1975), and thus to arrive at the first solid determination of the structural and dynamical parameters (core and half-mass radii, relaxation time, collisional parameter, etc.) and of the total mass and mass distribution of the system.
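A minimal sketch of how such a dispersion profile is assembled from measured radial velocities (the binning, error model and numbers below are hypothetical, not the actual 47 Tucanae pipeline): bin the stars radially and subtract the measurement errors in quadrature from the velocity scatter:

```python
import numpy as np

def dispersion_profile(r, v, v_err, bin_edges):
    """Line-of-sight velocity dispersion per radial bin, with the
    per-star measurement errors removed in quadrature."""
    sigma = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        m = (r >= lo) & (r < hi)
        s2 = np.var(v[m], ddof=1) - np.mean(v_err[m] ** 2)
        sigma.append(np.sqrt(max(s2, 0.0)))
    return np.array(sigma)
```

With roughly a thousand stars and sub-0.5 km/s accuracies, as in the abstract, the error correction is small; the resulting profile is what King- or Wilson-model fits are matched against.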

Relevance: 20.00%

Publisher:

Abstract:

Reactive oxygen species (ROS) production is important in the toxicity of pathogenic particles such as fibres. We examined the oxidative potential of straight (50 µm and 10 µm) and tangled carbon nanotubes in a cell-free assay, in vitro and in vivo, using different dispersants. The cell-free oxidative potential of the tangled nanotubes was higher than that of the straight fibres. In cultured macrophages, tangled tubes generated significantly more ROS at 30 min, while straight tubes increased ROS at 4 h. ROS was significantly higher in bronchoalveolar lavage cells of animals instilled with tangled and 10 µm straight fibres, whereas the number of neutrophils increased only in animals treated with the long tubes. Addition of dispersants to the suspension media led to enhanced ROS detection by the tangled tubes in the cell-free system. In summary, tangled fibres generated more ROS in a cell-free system and in cultured cells, while straight fibres produced a slower but more prolonged effect in animals.