Abstract:
During the last decade, advances in sensor design and improved base materials have pushed the radiation hardness of current silicon detector technology to impressive performance, which should allow the tracking systems of the Large Hadron Collider (LHC) experiments to operate at nominal luminosity (10^34 cm^-2 s^-1) for about 10 years. Beyond that, however, current silicon detectors are unable to cope with the radiation environment. Silicon carbide (SiC), which has recently been recognized as potentially radiation hard, is therefore being studied. In this work, the effect of high-energy neutron irradiation on 4H-SiC particle detectors was analyzed. Schottky and junction particle detectors were irradiated with 1 MeV neutrons up to a fluence of 10^16 cm^-2. It is well known that the degradation of detectors under irradiation, independently of the structure used for their realization, is caused by lattice damage, such as the creation of point-like defects, dopant deactivation and dead-layer formation, and that a crucial aspect for understanding defect kinetics at the microscopic level is the correct identification of the crystal defects in terms of their electrical activity. To clarify the defect kinetics, thermal transient spectroscopy (DLTS and PICTS) analyses were carried out on samples irradiated at increasing fluences. The defect evolution was correlated with the transport properties of the irradiated detectors, always in comparison with an un-irradiated one. The degradation of the charge collection efficiency of Schottky detectors induced by neutron irradiation was related to the increasing concentration of defects as a function of neutron fluence.
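The relation drawn above between neutron fluence, defect concentration and charge collection efficiency can be sketched numerically. The fragment below is purely illustrative: the linear defect-introduction rate `g_cm1` and the characteristic fluence `phi0_cm2` are hypothetical placeholders, not values from this work.

```python
def defect_concentration(fluence_cm2, g_cm1=1.0):
    """Assumed linear defect introduction: N_t = g * Phi (g in cm^-1)."""
    return g_cm1 * fluence_cm2

def cce(fluence_cm2, phi0_cm2=1e15):
    """Toy trapping model: charge collection efficiency falls as the defect
    concentration (here proportional to fluence) traps more carriers."""
    return 1.0 / (1.0 + fluence_cm2 / phi0_cm2)

for phi in (1e14, 1e15, 1e16):
    print(f"fluence {phi:.0e} cm^-2 -> CCE {cce(phi):.2f}")
```

Any saturating functional form would serve equally well here; the point is only that correlating CCE degradation with fluence presupposes some such defect-introduction model.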
Abstract:
Several activities were conducted during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned a feasibility study for an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic sources of a PMT (Photo-Multiplier Tube). The low-power analog acquisition allows multiple channels of the PMT to be sampled simultaneously at different gain factors, increasing the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve acquisition performance and the characterization of neutrino events. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been realized to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After validation of the whole front-end architecture, this feature would probably be integrated into a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the FPGA's configuration memory required the integration of a flash ISP (In-System Programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory, the behavior of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA.
PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to a loan from Roma University and INFN, a full readout chain equivalent to that of NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which was able to receive and execute commands issued from the PC console and to answer back with a reply. The remotely configurable logic behaved well too and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase 2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), introducing a new analog front end while inheriting most of the digital logic of the current DAQ board discussed in this thesis. Concerning the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. This chip is a matrix of 4096 active pixel sensors with deep N-well implantations meant for charge collection and for shielding the analog electronics from digital noise. The chip integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) line at CERN, Geneva (CH).
The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which allowed about 90 million events to be stored in 7 equivalent days of beam live-time. My activities mainly concerned the realization of a firmware interface to and from the MAPS chip, in order to integrate it into the general DAQ system. Thereafter, I worked on the DAQ software to implement a proper Slow Control interface for the APSEL-4D. Several APSEL-4D chips with different thinnings were tested during the test beam. Those thinned to 100 and 300 um showed an overall efficiency of about 90% at a threshold of 450 electrons. The test beam also allowed an estimate of the pixel sensor resolution, yielding good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual plot, taking the multiple scattering effect into account.
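The pitch/sqrt(12) figure of merit quoted above for a binary pixel readout, and the quadrature unfolding of telescope and multiple-scattering contributions from the residual width, can be sketched as follows. The 50 um pitch and the unfolding terms are hypothetical numbers chosen for illustration, not values from the test beam:

```python
import math

def binary_resolution(pitch_um):
    """Expected resolution of a binary-readout pixel with hits uniformly
    distributed across the pitch: pitch / sqrt(12)."""
    return pitch_um / math.sqrt(12)

def intrinsic_resolution(residual_um, telescope_um, ms_um):
    """Unfold telescope pointing and multiple-scattering terms in quadrature
    from the measured residual width."""
    return math.sqrt(residual_um**2 - telescope_um**2 - ms_um**2)

print(f"{binary_resolution(50.0):.1f} um")  # ~14.4 um for a 50 um pitch
print(f"{intrinsic_resolution(16.0, 4.0, 5.0):.1f} um")
```

The quadrature subtraction assumes the three contributions are independent and Gaussian, which is the standard treatment of residual widths in test-beam analyses.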
Abstract:
In territories where food production is mostly scattered across many small/medium-size or even domestic farms, a large amount of heterogeneous residues is produced yearly, since farmers usually carry out different activities on their properties. The amount and composition of farm residues therefore change widely during the year, according to the particular production process carried out in each period. Coupling high-efficiency micro-cogeneration energy units with easily handled biomass conversion equipment, suitable for treating different materials, would provide many important advantages to farmers and to the community as well; increasing the feedstock flexibility of gasification units is therefore seen today as a further paramount step towards their wide adoption in rural areas and as a real necessity for their utilization at small scale. Two main research topics were considered of chief concern for this purpose and are therefore discussed in this work: the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. According to these two main aspects, the present work is divided into two main parts. The first focuses on the biomass gasification process, which was investigated in its theoretical aspects and then modelled analytically in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (the temperature profile, above all), in order to point out the main differences that prevent the use of the same conversion unit for different materials.
To this end, a kinetic-free gasification model was initially developed in Excel sheets, considering different values of the air-to-biomass ratio and taking downdraft gasification technology as the particular application examined. The differences in syngas production and working conditions (process temperatures, above all) among the considered fuels were related to biomass properties such as elementary composition and ash and water contents. The novelty of this analytical approach lies in the use of ratios of kinetic constants to determine the oxygen distribution among the different oxidation reactions (regarding volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone; through these, the energy and mass balances involved in the process algorithm were linked together as well. Moreover, the main advantage of this analytical tool is the ease with which the input data for a particular biomass material can be inserted into the model, so that a rapid evaluation of its thermo-chemical conversion properties can be obtained, mainly based on its chemical composition. Good agreement of the model results with other literature and experimental data was found for almost all the considered materials (except refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, based on an analysis of the fundamental thermo-physical and thermo-chemical mechanisms assumed to regulate the main solid conversion steps involved in the gasification process.
Gasification units were schematically subdivided into four reaction zones, corresponding respectively to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was correlated with the kinetic rates (for pyrolysis and char gasification only) and with the heat and mass transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (particle size, above all), it was found that for all the considered materials the char gasification step is kinetically limited, and temperature is therefore the main working parameter controlling this step. Solids drying is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is almost entirely achieved by radiative heat transfer from the hot reactor walls to the bed of material. For pyrolysis, instead, working temperature, particle size and the very nature of the biomass (through its own pyrolysis heat) all have comparable weight in the process development, so that the corresponding time can depend on any of these factors according to the particular fuel gasified and the particular conditions established inside the gasifier. The same analysis also led to an estimation of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units could finally be made. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit seems suitable for more than one biomass species.
Nevertheless, since the reactor diameters were found to be quite similar for all the examined materials, a single unit could be envisaged for all of them by adopting the largest diameter and combining the maximum heights of each reaction zone as calculated for the different biomasses. A total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air-injection nozzles at different levels along the reactor, the gasification zone could be set up properly according to whichever material is being gasified. Finally, since the gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate (on which the process temperatures depend) could also be regulated for each gasified material, so that the available reactor volumes would be suitable for the complete development of solid conversion in each case, without noticeably changing the fluid-dynamic behaviour of the unit or the air/biomass ratio. The second part of this work deals with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially when multi-fuel gasifiers are assumed to be used, more substantial gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can simultaneously be present in the exit gas stream and, as a consequence, suitable gas cleaning systems have to be designed. In this work, an overall study on the assessment of gas cleaning lines is carried out.
Differently from other research efforts in the same field, the main aim here is to define general arrangements for gas cleaning lines suitable for removing several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three different plant sizes, corresponding respectively to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h. Their performances were examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops, above all) and their consumption of energy and materials. Subsequently, the designed units were combined into different overall gas cleaning line arrangements ("paths"), following technical constraints mainly determined from the same performance analysis of the cleaning units and from the presumable synergistic effects of contaminants on the correct working of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues to be settled in the design of the paths was the removal of tars from the gas stream, to prevent filter plugging and/or clogging of the line pipes. To this end, a catalytic tar-cracking unit was envisaged as the only viable solution, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar-cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, with a consequent significant air consumption for this operation, was calculated in all cases.
Other difficulties had to be overcome in the abatement of alkali metals, which condense at temperatures lower than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Apart from these two solutions, which seem unavoidable in gas cleaning line design, fully high-temperature gas cleaning lines turned out not to be achievable for the two larger plant sizes. Indeed, since the use of temperature control devices was precluded in the adopted design procedure, ammonia partial-oxidation units (the only methods considered for the abatement of ammonia at high temperature) were not suitable for the large-scale units, because of the strong increase of reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements were finally designed for each considered plant size, so that the possibility of cleaning the gas up to the required standard was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of defined operational parameters, among which total pressure drops, total energy losses, number of units and secondary material consumption. On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption of water scrubber units in the ammonia absorption process. This result is, however, connected to the possibility of using activated carbon units for ammonia removal and a Nahcolite adsorber for hydrochloric acid; the very high efficiency of this latter material is also remarkable.
Finally, as an estimate of the overall energy loss of the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter based on the lower heating value of the gas only. This overall study of gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.
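The air-to-biomass ratio that drives the gasification model above is usually expressed through the stoichiometric air demand computed from the fuel's ultimate analysis. A minimal sketch follows; the softwood composition and the supplied air value are typical illustrative numbers, not data from this work:

```python
def stoich_air(c, h, o, s=0.0):
    """Stoichiometric air demand (kg air per kg dry fuel) from ultimate-analysis
    mass fractions: O2 demand = 2.667*C + 8*H + S - O; air is 23.2% O2 by mass."""
    o2_need = 2.667 * c + 8.0 * h + s - o
    return o2_need / 0.232

def equivalence_ratio(air_per_kg_fuel, c, h, o, s=0.0):
    """ER = supplied air / stoichiometric air; downdraft gasifiers are
    commonly operated around ER ~ 0.25-0.35."""
    return air_per_kg_fuel / stoich_air(c, h, o, s)

# Typical dry, ash-free softwood: ~50% C, 6% H, 43% O by mass
print(f"stoichiometric air: {stoich_air(0.50, 0.06, 0.43):.2f} kg/kg")
print(f"ER at 1.8 kg air/kg fuel: {equivalence_ratio(1.8, 0.50, 0.06, 0.43):.2f}")
```

Varying `c`, `h` and `o` per feedstock is exactly the kind of composition-driven input the model described above takes, and the resulting ER shift is one reason different fuels demand different working conditions.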
Abstract:
Secondary organic aerosol (SOA) is an important constituent of atmospheric aerosol particles. Atmospheric aerosols matter because they influence climate through direct (scattering and absorption of radiation) and indirect (cloud condensation nuclei) effects. According to current estimates, SOA formation from biogenic hydrocarbons is globally far more important than SOA formation from anthropogenic hydrocarbons. The terpenes are reactive hydrocarbons emitted in large quantities by vegetation and regarded as the most important precursors of biogenic SOA. In this work, a method was developed that allows the quantification of acidic products of terpene oxidation. Size-selected aerosol (PM 2.5) was collected on quartz filters, which were extracted with methanol under ultrasonic agitation. After concentration and solvent exchange to water, followed by standard addition, the samples were analyzed by a capillary HPLC-ESI-MSn method. The ion-trap mass spectrometer used (LCQ-DECA) permits structure elucidation through selective fragmentation of the quasi-molecular ions. Quantification was partly performed in MS/MS mode, which improved selectivity and detection limits. To identify terpene oxidation products that were not available as standards, ozonolysis experiments were carried out, enabling the identification of a number of oxidation products in real samples. Besides already known terpene oxidation products, several products were unambiguously detected in real samples for the first time as products of α-pinene. In the samples from the ozonolysis experiments, products with high molecular weight (>300 u) were also detected, resembling the substances described in the literature as dimers or polymers.
They could not, however, be found in field samples. During five measurement campaigns in Germany and Finland, samples of the atmospheric particle phase were taken. The quantification of oxidation products of α-pinene, β-pinene, 3-carene, sabinene and limonene in these samples revealed a large temporal and spatial variability of the concentrations. The concentration of pinic acid, for example, ranged between about 0.4 and 21 ng/m³ across all campaigns. Products of different terpenes could always be detected. Products of some terpenes are even suitable as marker substances for different plant species: sabinene products such as sabinic acid can be used as markers for emissions of deciduous trees such as beech or birch, while carene products such as caronic acid can serve as markers for conifers, especially pines. Using the quantified substances as markers, together with measurements of the organic and elemental carbon content of the aerosol, the fraction of secondary organic aerosol (SOA) originating from the ozonolysis of terpenes was calculated. Surprisingly, only 1% to 8% of the SOA could be attributed to terpene ozonolysis. This contrasts with the prevailing view that terpene ozonolysis is the most important source of biogenic SOA. Reasons for this discrepancy are discussed in the thesis. Further efforts are needed, however, to fully understand the atmospheric processes of SOA formation.
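The marker-based attribution described above (scaling measured tracer concentrations by chamber-derived mass yields and comparing the sum with the total SOA estimated from organic/elemental carbon measurements) can be sketched as follows. All concentrations and yields below are hypothetical placeholders, not results of this work:

```python
def soa_fraction_from_markers(markers, total_soa_ng_m3):
    """Estimate the fraction of total SOA attributable to terpene ozonolysis.
    markers: dict name -> (measured conc in ng/m3, chamber mass yield defined
    as marker mass per unit of ozonolysis-derived SOA mass)."""
    ozonolysis_soa = sum(conc / y for conc, y in markers.values())
    return ozonolysis_soa / total_soa_ng_m3

# Illustrative inputs (hypothetical yields):
markers = {
    "pinic acid": (5.0, 0.05),    # alpha-/beta-pinene tracer
    "caronic acid": (2.0, 0.04),  # 3-carene tracer
}
print(f"{100 * soa_fraction_from_markers(markers, 3000.0):.0f}% of SOA")
```

The result is sensitive to the assumed yields, which is one of the uncertainties any such marker-scaling approach carries.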
Abstract:
In this thesis we have presented the preparation of highly crosslinked, spherical, photoreactive colloidal particles of radius about 10 nm, based on a trimethoxysilane monomer. These particles are chemically labeled with two different dye systems (coumarin, cinnamate) known to show reversible photodimerization. By analyzing the change in particle size upon UV irradiation with dynamic light scattering, we could demonstrate that the partially reversible photoreaction can in principle be utilized to control the growth and shrinkage of colloidal clusters. Here, selection of the appropriate wavelengths during irradiation, employing suitable optical filters, proved to be very important. Next, we showed how photocrosslinking of our nanoparticles within the micrometer-sized thin oil shell of water-oil-water emulsion droplets leads to a new species of optically addressable microcontainers. The inner water droplet of these emulsions may contain drugs, dyes or other water-soluble components, leading to filled containers. Thickness, mechanical stability and light resistance of the container walls can be controlled in a simple way by the amount and the adjustable photoreactivity (i.e. the number of labels per particle) of the nanoparticles. Importantly, the chemical bonds between the nanoparticles constituting the microcapsule shell can be cleaved photochemically by irradiation with UV light. An additional major advantage is that filling our microcapsules with water-soluble substrate molecules is extremely simple, using a solution of the guest molecules as the inner water phase of the W/O/W emulsion. This optically controlled destruction of our microcontainers thus opens up a pathway to the controlled release of the enclosed components, as illustrated by the example of enclosed cyclodextrin molecules.
Abstract:
The present thesis is concerned with the study of a quantum physical system composed of a small particle system (such as a spin chain) and several quantized massless boson fields (such as photon gases or phonon fields) at positive temperature. The setup serves as a simplified model for matter in interaction with thermal "radiation" from different sources. Questions concerning the dynamical and thermodynamic properties of particle-boson configurations far from thermal equilibrium are at the center of interest. We study a specific situation where the particle system is brought into contact with the boson systems (occasionally referred to as heat reservoirs), the reservoirs being prepared close to thermal equilibrium states, each at a different temperature. We analyze the interacting time evolution of such an initial configuration and show thermal relaxation of the system into a stationary state, i.e., we prove the existence of a time-invariant state which is the unique limit state of the considered initial configurations evolving in time. As long as the reservoirs have been prepared at different temperatures, this stationary state exhibits thermodynamic characteristics such as stationary energy fluxes and a positive entropy production rate, which distinguish it from a thermal equilibrium at any temperature. We therefore refer to it as a non-equilibrium stationary state, or simply NESS. The physical setup is phrased mathematically in the language of C*-algebras. The thesis gives an extended review of the application of operator-algebraic theories to quantum statistical mechanics and introduces in detail the mathematical objects used to describe matter in interaction with radiation. The C*-theory is adapted to the concrete setup. The algebraic description of the system is lifted into a Hilbert space framework; the appropriate Hilbert space representation is given by a bosonic Fock space over a suitable L2-space.
The first part of the present work concludes with the derivation of a spectral theory which connects the dynamical and thermodynamic features with spectral properties of a suitable generator, say K, of the time evolution in this Hilbert space setting. In this way, the question of thermal relaxation becomes a spectral problem. The operator K is of Pauli-Fierz type. The spectral analysis of the generator K follows. This task is the core of the work, and it employs various functional analytic techniques. The operator K results from a perturbation of an operator L0 which describes the non-interacting particle-boson system. All spectral considerations are done in a perturbative regime, i.e., we assume that the coupling strength is sufficiently small. Extracting the dynamical features of the system from properties of K requires, in particular, knowledge of the spectrum of K in the nearest vicinity of the eigenvalues of the unperturbed operator L0. Since convergent Neumann series expansions only suffice to study the perturbed spectrum in a neighborhood of the unperturbed one on a scale of the order of the coupling strength, we need to apply a more refined tool, the Feshbach map. This technique allows the analysis of the spectrum on a smaller scale by transferring the analysis to a spectral subspace. The need for spectral information on arbitrarily small scales requires an iteration of the Feshbach map, and this procedure leads to an operator-theoretic renormalization group. The reader is introduced to the Feshbach technique, and the renormalization procedure based on it is discussed in full detail. It is further explained how the spectral information is extracted from the renormalization group flow. The present dissertation extends a recent research contribution by Jakšić and Pillet on a similar physical setup in two respects.
Firstly, we consider the more delicate situation of bosonic heat reservoirs instead of fermionic ones, and secondly, the system can be studied uniformly for small reservoir temperatures. The adaptation of the Feshbach-map-based renormalization procedure of Bach, Chen, Fröhlich, and Sigal to concrete spectral problems in quantum statistical mechanics is a further novelty of this work.
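The Feshbach map at the heart of the renormalization analysis can be written schematically as follows. This is the generic textbook form, not a formula quoted from the thesis; P denotes the projection onto the spectral subspace of interest:

```latex
% Schematic (smooth) Feshbach map for the operator K - z:
F_P(K - z) \;=\; P\,(K - z)\,P \;-\; P K \bar{P}\,
    \bigl(\bar{P}\,(K - z)\,\bar{P}\bigr)^{-1} \bar{P} K P ,
\qquad \bar{P} = \mathbf{1} - P .
```

Its key isospectrality property is that K − z is invertible if and only if F_P(K − z) is invertible on Ran P, so spectral questions about K can be transferred to successively smaller spectral subspaces; iterating the map produces the operator-theoretic renormalization group flow described in the text.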
Abstract:
It has been demonstrated that iodine has an important influence on atmospheric chemistry, especially on the formation of new particles and the enrichment of iodine in marine aerosols. The most probable chemical species involved in the production or growth of these particles have been identified as iodine oxides, produced photochemically from biogenic halocarbon emissions and/or iodine emission from the sea surface. However, the iodine chemistry from the gaseous to the particulate phase in the coastal atmosphere, and the chemical nature of the condensing iodine species, are still not understood. A Tenax/Carbotrap adsorption sampling technique and a thermo-desorption/cryo-trap/GC-MS system were further developed and improved for measuring volatile organic iodine species in the gas phase. Several iodo-hydrocarbons, such as CH3I, C2H5I, CH2ICl, CH2IBr and CH2I2, were measured in samples from a calibration test gas source (standards), in real air samples and in samples from seaweed/macro-algae emission experiments. A denuder sampling technique was developed to characterize potential precursor compounds of coastal particle formation processes, such as molecular iodine in the gas phase. Starch-, TMAH- (TetraMethylAmmonium Hydroxide) and TBAH- (TetraButylAmmonium Hydroxide) coated denuders were tested for their efficiency in collecting I2 on the inner surface, followed by TMAH extraction and ICP/MS determination with tellurium added as an internal standard. The developed method proved to be an effective, accurate and suitable procedure for I2 measurement in the field, with an estimated detection limit of ~0.10 ng∙L-1 for a sampling volume of 15 L. An H2O/TMAH-Extraction-ICP/MS method was developed for the accurate and sensitive determination of iodine species in tropospheric aerosol particles.
The particle samples were collected on cellulose-nitrate filters using conventional filter holders, or on cellulose-nitrate/Tedlar foils using a 5-stage Berner impactor for size-segregated particle analysis. The water-soluble species IO3- and I- were separated by an anion-exchange process after water extraction. Non-water-soluble species, including iodine oxides and organic iodine, were digested and extracted with TMAH. Afterwards, the triplicate samples were analysed by ICP/MS. The detection limit for particulate iodine was determined to be 0.10~0.20 ng∙m-3 for sampling volumes of 40~100 m3. The developed methods were applied in two field campaigns, in May 2002 and September 2003, at and around the Mace Head Atmospheric Research Station (MHARS) on the west coast of Ireland. Elemental iodine, as a precursor of iodine chemistry in the coastal atmosphere, was determined in the gas phase at a seaweed hot-spot around the MHARS; I2 concentrations were in the range 0~1.6 ng∙L-1 and showed a positive correlation with the ozone concentration. A seaweed-chamber experiment performed at the field station showed that the I2 emission rate from macro-algae was in the range 0.019~0.022 ng∙min-1∙kg-1. During these experiments, nanometer-particle concentrations were obtained from Scanning Mobility Particle Sizer (SMPS) measurements. Particle number concentrations were found to correlate linearly with elemental iodine in the gas phase of the seaweed chamber, showing that gaseous I2 is one of the important precursors of new particle formation in the coastal atmosphere. Iodine contents in the particle phase were measured in both field campaigns at and around the field station. Total iodine concentrations were found to be in the range 1.0~21.0 ng∙m-3 in the PM2.5 samples. A significant correlation between the total iodine concentrations and the nanometer-particle number concentrations was observed.
The particulate iodine speciation indicated that iodide contents are usually higher than those of iodate in all samples, with ratios in the range 2~5:1. It is possible that these water-soluble iodine species are transferred through the sea-air interface into the particle phase. The ratio of water-soluble (iodate + iodide) to non-water-soluble species (probably iodine oxides and organic iodine compounds) was observed to be in the range 1:1 to 1:2. Higher concentrations of non-water-soluble species, as products of photolysis transferred from the gas phase into the particle phase, were found in samples collected while nucleation events occurred. This supports the idea that iodine chemistry in the coastal boundary layer is linked with new particle formation events. Furthermore, artificial aerosol particles were formed from gaseous iodine sources (e.g. CH2I2) in a laboratory reaction-chamber experiment, in which the rate constant of CH2I2 photolysis was calculated on the basis of first-order reaction kinetics. The end products of iodine chemistry in the particle phase were identified and quantified.
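The first-order kinetic treatment of the CH2I2 photolysis mentioned above amounts to fitting C(t) = C0·exp(−J·t) to the measured decay. A minimal sketch with hypothetical chamber numbers (the thesis does not quote these values):

```python
import math

def photolysis_rate(c0, ct, t_s):
    """First-order decay C(t) = C0*exp(-J*t)  =>  J = ln(C0/Ct) / t  (s^-1)."""
    return math.log(c0 / ct) / t_s

def remaining_fraction(j_s, t_s):
    """Fraction of the precursor left after time t at rate constant J."""
    return math.exp(-j_s * t_s)

# Hypothetical example: concentration halves after 600 s of illumination
j = photolysis_rate(1.0, 0.5, 600.0)
print(f"J = {j:.2e} s^-1, fraction left after 1200 s: {remaining_fraction(j, 1200.0):.2f}")
```

In practice J would be obtained from a linear fit of ln C(t) against t over many samples rather than from a single pair of points.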
Resumo:
Particle concentration is a principal factor affecting the erosion rate of solid surfaces under particle impact, for instance in pipe bends of pneumatic conveyors; it is well known that a reduction in the specific erosion rate occurs at high particle concentrations, a phenomenon referred to as the “shielding effect”. The cause of shielding is believed to be the increased likelihood of inter-particle collisions: the high collision probability between incoming and rebounding particles reduces the frequency and severity of particle impacts on the target surface. In this study, the effects of particle concentration on the erosion of a mild steel bend surface have been investigated in detail using three different particulate materials on an industrial-scale pneumatic conveying test rig. The materials were chosen so that two had the same particle density but very different particle sizes, whereas two had very similar particle sizes but very different particle densities. Experimental results confirm the shielding effect at high particle concentration and show that particle density has a far more significant influence than particle size on the magnitude of the shielding effect. A new method of correcting for the change in erosivity of the particles during repeated handling, so as to remove this factor from the data, has been established and appears to be successful. Moreover, a novel empirical model of the shielding effect has been used, in terms of an erosion resistance that appears to decrease linearly as the particle concentration decreases. With the model it is possible to find the specific erosion rate as the particle concentration tends to zero, and conversely to predict how the specific erosion rate changes at finite particle concentrations; this is critical for predicting component life from erosion tester results, as the variation of the shielding effect with concentration differs between these two scenarios.
In addition, a previously unreported phenomenon has been recorded, in which a particulate material's erosivity steadily increased during repeated impacts.
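The form of the empirical shielding model described above can be sketched as follows; the linear-resistance form matches the abstract, but the parameter names and values are illustrative assumptions, not fitted values from the work:

```python
# Hedged sketch of an empirical shielding model: erosion resistance R
# varies linearly with particle concentration c, R(c) = R0 + s*c, and
# the specific erosion rate is its reciprocal, E(c) = 1 / R(c).
# R0 and s below are arbitrary illustrative numbers.
def specific_erosion_rate(c, r0=2.0, s=0.5):
    """Specific erosion rate at particle concentration c (arbitrary units)."""
    return 1.0 / (r0 + s * c)

# Extrapolating to zero concentration gives the unshielded rate 1/R0,
# while finite concentrations show the shielding-induced reduction.
e_zero = specific_erosion_rate(0.0)  # -> 0.5
e_high = specific_erosion_rate(8.0)  # -> 1/6, i.e. shielded
```

Fitting R0 and s from erosion-tester data at several concentrations would then let the model interpolate between tester conditions and plant conditions, which is the component-life use case the abstract describes.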
Resumo:
One of the main targets of the CMS experiment is the search for the Standard Model Higgs boson. The 4-lepton channel (from the Higgs decay h->ZZ->4l, l = e, mu) is one of the most promising. The analysis is based on the identification of two opposite-sign, same-flavor lepton pairs: the leptons are required to be isolated and to come from the same primary vertex. The Higgs would be statistically revealed by the presence of a resonance peak in the 4-lepton invariant mass distribution. The 4-lepton analysis at CMS is presented, covering its most important aspects: lepton identification, isolation variables, impact parameter, kinematics, event selection, background control, and the statistical analysis of the results. The search leads to evidence for the presence of a signal with a statistical significance of more than four standard deviations. The excess of data with respect to the background-only predictions indicates the presence of a new boson, with a mass of about 126 GeV/c2, decaying to two Z bosons, whose characteristics are compatible with those of the SM Higgs boson.
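The 4-lepton invariant mass at the heart of this analysis is a simple function of the summed lepton four-momenta. A minimal sketch (not CMS code; the toy event below is invented for illustration):

```python
import math

# Invariant mass of a system of particles from their four-momenta:
# m = sqrt(E^2 - |p|^2), with E and p the sums over the particles.
def invariant_mass(leptons):
    """leptons: list of (E, px, py, pz) four-vectors in GeV."""
    E = sum(l[0] for l in leptons)
    px = sum(l[1] for l in leptons)
    py = sum(l[2] for l in leptons)
    pz = sum(l[3] for l in leptons)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Toy event: two back-to-back lepton pairs (leptons taken massless),
# so the total momentum cancels and m equals the total energy.
event = [(40, 0, 0, 40), (40, 0, 0, -40), (23, 23, 0, 0), (23, -23, 0, 0)]
m4l = invariant_mass(event)  # -> 126.0 GeV/c2
```

A resonance search then histograms m4l over all selected events and looks for a localized excess over the background-only expectation.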
Resumo:
As part of this work, a new ice nucleus counter, FINCH (Fast Ice Nucleus CHamber), was developed, and first measurements of various test aerosols in the laboratory and of atmospheric aerosol were performed. The aerosol particles, or ice nuclei (IN), are grown into ice crystals at temperatures below the freezing point and at supersaturations with respect to ice, so that they can be registered by optical detection. In FINCH this is realized by the mixing principle, which permits continuous measurement of the IN number concentration. Very high sampling flow rates of up to 10 l/min can be used. Rapid scanning of different saturation ratios with respect to ice is also possible, over a wide range of 0.9 - 1.7 at constant temperatures down to −23 °C. The detection of the ice crystals, and hence the determination of the IN number concentration, is carried out with a newly developed optical sensor based on the different depolarization of the backscattered light from ice crystals and supercooled droplets. In laboratory measurements, the activation temperature and activation saturation ratio of silver iodide (AgI) and kaolinite were measured. The results showed good agreement with values from the literature and with parallel measurements with FRIDGE (FRankfurt Ice Deposition freezinG Experiment). FRIDGE is a static diffusion chamber for activating and counting ice nuclei collected on a filter. In atmospheric measurements at the Jungfraujoch (Switzerland), the IN number concentrations, up to 4 l−1, were within the range of values known from the literature. Measurements of ice-crystal residues from mixed-phase clouds showed, however, that only about one in a thousand is active as an ice nucleus in the deposition mode. Here, other freezing processes and secondary ice-crystal formation appear to be of great importance for the number concentration of the ice-crystal residues.
A further measurement of atmospheric aerosol in Frankfurt showed IN number concentrations of up to 30 l−1 at activation temperatures around −14 °C. Parallel sampling on silicon wafers for the IN number-concentration measurements in FRIDGE yielded values in the same concentration range.
Resumo:
Most ocean-atmosphere exchanges take place in polar environments, due to the low temperatures which favor the absorption of atmospheric gases, in particular CO2. For this reason, alterations of the biogeochemical cycles in these areas can have a strong impact on the global climate. With the aim of contributing to the definition of the mechanisms regulating biogeochemical fluxes, we have analyzed the particles collected in the Ross Sea in different years (ROSSMIZE, BIOSESO 1 and 2, ROAVERRS and ABIOCLEAR projects) at two sites (moorings A and B). A more efficient method to prepare sediment-trap samples for analysis has therefore been developed. We have also processed satellite data of sea ice, chlorophyll a, and diatom concentration. At both sites, in each year considered, there was high seasonal and inter-annual variability of the biogeochemical fluxes, closely correlated with sea-ice cover and primary productivity. The comparison between the samples collected at moorings A and B in 2008 highlighted the main differences between the two sites. Particle fluxes at mooring A, located in a polynya area, are higher than those at mooring B and occur about a month earlier. In the mooring B area it has been possible to correlate the particle fluxes with the sea-ice concentration anomalies and with the atmospheric changes in response to the El Niño Southern Oscillation. In 1996 and 1999, La Niña years, the sea-ice concentrations in this area were lower than in 1998, an El Niño year. The inverse correlation was found for 2005 and 2008. In the mooring A area, significant differences in mass and biogenic fluxes between 2005 and 2008 have been recorded. This highlighted the high variability of lateral advection processes and connected them to the physical forcing.
Resumo:
This thesis presents several techniques designed to drive a swarm of robots through an a-priori unknown environment in order to move the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both study the interactions between different entities (also called agents or units) in Multi-Agent Systems (MAS): the first belongs to the Artificial Intelligence context and the second to the Distributed Systems context. Each theory, from its own point of view, exploits the emergent behaviour that arises from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm have been exploited to overcome and minimize difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps to keep the environmental information detected by each single agent updated across the swarm. Swarm Intelligence has been applied through the Particle Swarm Optimization (PSO) algorithm, taking advantage of its features as a navigation system. Graph Theory has been applied by exploiting Consensus and the agreement protocol, with the aim of maintaining the units in a desired and controlled formation. This approach was followed in order to preserve the power of PSO while controlling part of its random behaviour with a distributed control algorithm such as Consensus.
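The two building blocks named above can be sketched in a few lines; this is a minimal illustration of a standard PSO update and a standard consensus step, with parameter values and graph chosen arbitrarily, not the thesis's controller:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO velocity/position update for one unit (1-D)."""
    r1, r2 = random.random(), random.random()
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

def consensus_step(positions, neighbours, eps=0.2):
    """One agreement-protocol step: x_i <- x_i + eps * sum_j (x_j - x_i)
    over the neighbours j of unit i in the communication graph."""
    return [x + eps * sum(positions[j] - x for j in nbrs)
            for x, nbrs in zip(positions, neighbours)]

# Three units on a line graph 0-1-2: repeated consensus steps drive
# them to agreement while preserving the average position.
pos = [0.0, 1.0, 4.0]
nbrs = [[1], [0, 2], [1]]
for _ in range(60):
    pos = consensus_step(pos, nbrs)

# One PSO navigation step for a single unit (stochastic output).
x, v = pso_step(0.0, 0.0, pbest=1.0, gbest=2.0)
```

In a combined scheme of the kind the abstract describes, PSO would supply the group's goal-seeking motion while the consensus term keeps inter-unit offsets in a desired formation.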
Resumo:
In this thesis, the influence of composition changes on the glass transition behavior of binary liquids in two and three spatial dimensions (2D/3D) is studied in the framework of mode-coupling theory (MCT). The well-established MCT equations are generalized to isotropic and homogeneous multicomponent liquids in arbitrary spatial dimensions. Furthermore, a new method is introduced which allows a fast and precise determination of special properties of glass transition lines. The new equations are then applied to the following model systems: binary mixtures of hard disks/spheres in 2D/3D, binary mixtures of dipolar point particles in 2D, and binary mixtures of dipolar hard disks in 2D. Some general features of the glass transition lines are also discussed. The direct comparison of the binary hard disk/sphere models in 2D/3D shows similar qualitative behavior. In particular, for binary mixtures of hard disks in 2D the same four so-called mixing effects are identified as were found before by Götze and Voigtmann for binary hard spheres in 3D [Phys. Rev. E 67, 021502 (2003)]. For instance, depending on the size disparity, adding a second component to a one-component liquid may stabilize either the liquid or the glassy state. The MCT results for the 2D system are in qualitative agreement with available computer simulation data. Furthermore, the glass transition diagram found for binary hard disks in 2D strongly resembles the corresponding random close packing diagram. Concerning dipolar systems, a comparison between the experimental partial structure factors and those from computer simulations demonstrates that the experimental system of König et al. [Eur. Phys. J. E 18, 287 (2005)] is well described by binary point dipoles in 2D. For such mixtures of point particles it is demonstrated that MCT always predicts a plasticization effect, i.e.
a stabilization of the liquid state due to mixing, in contrast to binary hard disks in 2D or binary hard spheres in 3D. It is demonstrated that the predicted plasticization effect is in qualitative agreement with experimental results. Finally, a glass transition diagram for binary mixtures of dipolar hard disks in 2D is calculated. These results demonstrate that at higher packing fractions there is a competition between the mixing effects occurring for binary hard disks in 2D and those for binary point dipoles in 2D.
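For orientation, the well-established MCT equations referred to above have, in their standard one-component form (reproduced here schematically, not as the thesis's multicomponent generalization), the structure of an integro-differential equation for the normalized density correlator:

```latex
% Standard MCT equation of motion for the density correlator
% \phi_q(t) (one-component, Newtonian microscopic dynamics):
\ddot{\phi}_q(t) + \nu_q \dot{\phi}_q(t) + \Omega_q^2 \phi_q(t)
  + \Omega_q^2 \int_0^t m_q(t-t')\, \dot{\phi}_q(t')\, \mathrm{d}t' = 0,
% with a memory kernel that is a bilinear functional of the
% correlators, coupled through the static structure factors:
m_q(t) = \sum_{\mathbf{k}+\mathbf{p}=\mathbf{q}}
  V(\mathbf{q};\mathbf{k},\mathbf{p})\, \phi_k(t)\, \phi_p(t).
```

Glass transition lines are then located as the points in control-parameter space where the long-time limit of \(\phi_q(t)\) jumps discontinuously from zero (liquid) to a finite value (glass).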
Resumo:
The aim of this work is to present various aspects of the numerical simulation of particle and radiation transport for industrial and environmental protection applications, enabling the analysis of complex physical processes in a fast, reliable, and efficient way. In the first part we deal with the speed-up of the numerical simulation of neutron transport for nuclear reactor core analysis. The convergence of the source iteration scheme of the Method of Characteristics, applied to heterogeneous structured geometries, has been enhanced by means of Boundary Projection Acceleration, enabling the study of 2D and 3D geometries with transport theory without spatial homogenization. The computational performance has been verified with the C5G7 2D and 3D benchmarks, showing a considerable reduction in iterations and CPU time. The second part is devoted to the study of the temperature-dependent elastic scattering of neutrons from heavy isotopes near the thermal region. A numerical computation of the Doppler convolution of the elastic scattering kernel based on the gas model is presented, for a general energy-dependent cross section and scattering law in the center-of-mass system. The range of integration has been optimized by employing a numerical cutoff, allowing a faster numerical evaluation of the convolution integral. Legendre moments of the transfer kernel are subsequently obtained by direct quadrature, and a numerical analysis of their convergence is presented. In the third part we focus our attention on remote sensing applications of radiative transfer employed to investigate the Earth's cryosphere. The photon transport equation is applied to simulate the reflectivity of glaciers for varying age of the snow or ice layer, its thickness, the presence or absence of underlying layers, and the amount of dust included in the snow, creating a framework able to decipher the spectral signals collected by orbiting detectors.
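The "Legendre moments by direct quadrature" step can be illustrated with a minimal sketch; this is not the thesis code, and the (2l+1)/2 normalization convention is an assumption. For an angular kernel f(mu), mu = cos(theta) in [-1, 1], the moments are f_l = (2l+1)/2 * integral of f(mu) P_l(mu) over [-1, 1], evaluated here with Gauss-Legendre quadrature:

```python
import numpy as np

# Legendre moments of an angular kernel f(mu) by direct
# Gauss-Legendre quadrature (illustrative sketch).
def legendre_moments(f, lmax, npts=64):
    mu, w = np.polynomial.legendre.leggauss(npts)  # nodes, weights
    return [0.5 * (2 * l + 1)
            * np.sum(w * f(mu) * np.polynomial.legendre.Legendre.basis(l)(mu))
            for l in range(lmax + 1)]

# Sanity check: for an isotropic kernel f(mu) = 1/2, orthogonality of
# the Legendre polynomials leaves only the l = 0 moment, equal to 1/2.
moments = legendre_moments(lambda mu: 0.5 * np.ones_like(mu), lmax=3)
```

Convergence is then studied by increasing npts (and lmax) until the moments stabilize, which is the kind of numerical analysis the abstract refers to.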
Resumo:
Detection, localization and tracking of non-collaborative objects moving inside an area is of great interest to many surveillance applications. An ultra-wideband (UWB) multistatic radar is considered a good infrastructure for such anti-intruder systems, owing to the high range resolution provided by the UWB impulse radio and the spatial diversity achieved with a multistatic configuration. Detection of targets, which are typically human beings, is a challenging task due to reflections from unwanted objects in the area, shadowing, antenna cross-talk, low transmit power, and the blind zones arising from intrinsic peculiarities of UWB multistatic radars. Hence, we propose more effective detection, localization, and clutter removal techniques for these systems. However, the majority of the thesis effort is devoted to the tracking phase, which is essential for improving the localization accuracy, predicting the target position, and filling in missed detections. Since UWB radars are not linear Gaussian systems, the widely used tracking filters, such as the Kalman filter, are not expected to provide satisfactory performance. Thus, we propose the Bayesian filter as an appropriate candidate for UWB radars. In particular, we develop tracking algorithms based on particle filtering, the most common approximation of Bayesian filtering, for both single- and multiple-target scenarios. We also propose some effective detection and tracking algorithms based on image processing tools. We evaluate the performance of our proposed approaches by numerical simulations. Moreover, we provide experimental results from channel measurements for tracking a person walking in an indoor area in the presence of significant clutter. We discuss the existing practical issues and address them by proposing more robust algorithms.
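The particle-filtering approximation of the Bayesian filter mentioned above can be sketched as a minimal bootstrap (sampling-importance-resampling) filter for 1-D tracking; the constant-velocity motion model, the Gaussian noise levels, and the toy data are illustrative assumptions, not the thesis's UWB configuration:

```python
import math
import random

# Minimal bootstrap particle filter: predict, weight, estimate, resample.
def particle_filter(measurements, n=500, q=0.1, r=0.5):
    # State: (position, velocity); particles start around the origin.
    parts = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
    estimates = []
    for z in measurements:
        # 1. Predict: constant-velocity propagation plus process noise.
        parts = [(p + v + random.gauss(0, q), v + random.gauss(0, q))
                 for p, v in parts]
        # 2. Weight: Gaussian likelihood of the position measurement z.
        ws = [math.exp(-0.5 * ((z - p) / r) ** 2) for p, _ in parts]
        tot = sum(ws) or 1.0
        ws = [w / tot for w in ws]
        # 3. Estimate: weighted posterior mean of position.
        estimates.append(sum(w * p for w, (p, _) in zip(ws, parts)))
        # 4. Resample: multinomial resampling proportional to weight.
        parts = random.choices(parts, weights=ws, k=n)
    return estimates

# Toy run: a target moving at unit speed, observed with noisy positions.
random.seed(0)
truth = list(range(20))
meas = [t + random.gauss(0, 0.5) for t in truth]
est = particle_filter(meas)
```

In the radar setting the likelihood in step 2 would be replaced by the (non-linear, non-Gaussian) UWB measurement model, which is exactly why the particle approximation is preferred over the Kalman filter here.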