899 results for particle trajectory computation
Abstract:
The present thesis is concerned with the study of a quantum physical system composed of a small particle system (such as a spin chain) and several quantized massless boson fields (such as photon gases or phonon fields) at positive temperature. The setup serves as a simplified model for matter in interaction with thermal "radiation" from different sources. Questions concerning the dynamical and thermodynamic properties of particle-boson configurations far from thermal equilibrium are at the center of interest. We study a specific situation where the particle system is brought into contact with the boson systems (occasionally referred to as heat reservoirs), each reservoir prepared close to a thermal equilibrium state at its own temperature. We analyze the interacting time evolution of such an initial configuration and show thermal relaxation of the system into a stationary state, i.e., we prove the existence of a time-invariant state which is the unique limit state of the considered initial configurations evolving in time. As long as the reservoirs have been prepared at different temperatures, this stationary state exhibits thermodynamic characteristics such as stationary energy fluxes and a positive entropy production rate, which distinguish it from a thermal equilibrium state at any temperature. We therefore refer to it as a non-equilibrium stationary state, or simply NESS. The physical setup is phrased mathematically in the language of C*-algebras. The thesis gives an extended review of the application of operator algebraic theories to quantum statistical mechanics and introduces in detail the mathematical objects describing matter in interaction with radiation. The C*-theory is adapted to the concrete setup, and the algebraic description of the system is lifted into a Hilbert space framework: the appropriate Hilbert space representation is given by a bosonic Fock space over a suitable L2-space.
The first part of the present work concludes with the derivation of a spectral theory which connects the dynamical and thermodynamic features with spectral properties of a suitable generator, say K, of the time evolution in this Hilbert space setting. In that way, the question of thermal relaxation becomes a spectral problem. The operator K is of Pauli-Fierz type, and its spectral analysis follows. This task is the core part of the work and employs various kinds of functional analytic techniques. The operator K results from a perturbation of an operator L0 which describes the non-interacting particle-boson system. All spectral considerations are done in a perturbative regime, i.e., we assume that the coupling strength is sufficiently small. Extracting the dynamical features of the system from properties of K requires, in particular, knowledge of the spectrum of K in the nearest vicinity of the eigenvalues of the unperturbed operator L0. Since convergent Neumann series expansions are only suitable for studying the perturbed spectrum in a neighborhood of the unperturbed one on a scale of the order of the coupling strength, we need a more refined tool, the Feshbach map. This technique allows the analysis of the spectrum on a smaller scale by transferring the analysis to a spectral subspace. The need for spectral information on arbitrary scales requires an iteration of the Feshbach map, and this procedure leads to an operator-theoretic renormalization group. The reader is introduced to the Feshbach technique, the renormalization procedure based on it is discussed in full detail, and it is explained how the spectral information is extracted from the renormalization group flow. The present dissertation extends, in two respects, a recent research contribution by Jakšić and Pillet on a similar physical setup.
Firstly, we consider the more delicate situation of bosonic heat reservoirs instead of fermionic ones, and secondly, the system can be studied uniformly for small reservoir temperatures. The adaptation of the Feshbach map-based renormalization procedure of Bach, Chen, Fröhlich, and Sigal to concrete spectral problems in quantum statistical mechanics is a further novelty of this work.
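The Feshbach map mentioned above can be sketched in its simplest (projection) form; the notation here is generic and not taken verbatim from the thesis. For a projection $P$ onto a spectral subspace and $\bar P = \mathbf{1} - P$,

```latex
F_P(K - z) \;=\; P\,(K - z)\,P \;-\; P\,K\,\bar P\,
  \bigl( \bar P\,(K - z)\,\bar P \bigr)^{-1} \bar P\,K\,P ,
\qquad \bar P = \mathbf{1} - P .
```

On the domain where $\bar P (K - z)\bar P$ is invertible, $z$ lies in the spectrum of $K$ exactly when $0$ lies in the spectrum of $F_P(K - z)$ restricted to $\operatorname{Ran} P$; iterating this step with ever smaller spectral subspaces is what produces the operator-theoretic renormalization group flow.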
Abstract:
Particle concentration is a principal factor affecting the erosion rate of solid surfaces under particle impact, such as pipe bends in pneumatic conveyors; it is well known that a reduction in the specific erosion rate occurs at high particle concentrations, a phenomenon referred to as the "shielding effect". The cause of shielding is believed to be the increased likelihood of inter-particle collisions: the high collision probability between incoming and rebounding particles reduces the frequency and severity of particle impacts on the target surface. In this study, the effects of particle concentration on the erosion of a mild steel bend surface have been investigated in detail using three different particulate materials on an industrial-scale pneumatic conveying test rig. The materials were chosen so that two had the same particle density but very different particle sizes, whereas two had very similar particle sizes but very different particle densities. Experimental results confirm the shielding effect due to high particle concentration and show that the particle density has a far more significant influence than the particle size on the magnitude of the shielding effect. A new method of correcting for the change in erosivity of the particles under repeated handling, in order to take this factor out of the data, has been established and appears to be successful. Moreover, a novel empirical model of the shielding effect has been used, in terms of an erosion resistance which appears to decrease linearly as the particle concentration decreases. With the model it is possible to find the specific erosion rate as the particle concentration tends to zero, and conversely to predict how the specific erosion rate changes at finite particle concentrations; this is critical for predicting component life from erosion tester results, as the variation of the shielding effect with concentration differs between these two scenarios.
In addition, a previously unreported phenomenon has been recorded: a particulate material whose erosivity steadily increased during repeated impacts.
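The empirical shielding model described above can be sketched numerically: erosion resistance (the inverse of the specific erosion rate) is assumed linear in particle concentration, fitted to measurements, and extrapolated to vanishing concentration. The data values, units, and the least-squares procedure below are illustrative assumptions, not the thesis' actual measurements or fitting method.

```python
import numpy as np

# Hypothetical measurements: particle concentration and the measured
# specific erosion rate (mass eroded per mass of particles conveyed).
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # concentration (arb. units)
erosion = np.array([2.40, 2.28, 2.05, 1.72, 1.28])  # specific erosion rate

# Model from the abstract: erosion *resistance* R = 1/erosion varies
# linearly with concentration, R(c) = a + b*c, with b > 0 (shielding).
resistance = 1.0 / erosion
b, a = np.polyfit(conc, resistance, 1)  # slope b, intercept a

# Extrapolating to c -> 0 gives the shielding-free specific erosion
# rate, i.e. the single-impact regime an erosion tester aims to probe.
erosion_at_zero = 1.0 / a
print(f"R(c) = {a:.3f} + {b:.3f}*c  ->  erosion rate at c->0: "
      f"{erosion_at_zero:.2f}")
```

The fitted line can then be evaluated at any finite concentration to predict the in-service specific erosion rate, which is the link between tester results and component life mentioned in the abstract.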
Abstract:
Neuronal circuits in the retina analyze images according to qualitative aspects such as color or motion before the information is transmitted to higher visual areas of the brain. One example, studied over the last four decades, is the detection of motion direction by 'direction selective' neurons. Recently, the starburst amacrine cell, one type of retinal interneuron, has emerged as an essential player in the computation of direction selectivity. In this study, the mechanisms underlying the computation of direction selective calcium signals in starburst cell dendrites were investigated using whole-cell electrical recordings and two-photon calcium imaging. Analysis of the somatic electrical responses to visual stimulation and pharmacological agents indicated that the directional signal (i) is not computed presynaptically to starburst cells or by inhibitory network interactions, and is thus computed via a cell-intrinsic mechanism, which (ii) depends upon the differential, i.e. direction selective, activation of voltage-gated channels. Optically measuring dendritic calcium signals as a function of somatic voltage suggests (iii) a difference in resting membrane potential between the starburst cell's soma and its distal dendrites. In conclusion, it is proposed that the mechanism underlying direction selectivity in starburst cell dendrites relies on intrinsic properties of the cell, particularly on the interaction of spatio-temporally structured synaptic inputs with voltage-gated channels, and on their differential activation due to a somato-dendritic difference in membrane potential.
Abstract:
We present a complete, exact and efficient algorithm for computing the adjacency graph of an arrangement of quadrics (algebraic surfaces of degree 2). This is an important step towards the computation of the full 3D arrangement. We build on an existing implementation for computing the exact parametrization of the intersection curve of two quadrics. This makes it possible to determine the exact parameter values of the intersection points, to sort them along the curves, and to compute the adjacency graph. We call our implementation complete because it also handles all degenerate cases, such as singular or tangential intersection points. It is exact because it always computes the mathematically correct result. Finally, we call our implementation efficient because it compares well with the only previously implemented approach. Our approach was implemented within the EXACUS project. The central goal of EXACUS is to develop a prototype of a reliable and efficient CAD geometry kernel. Although we describe the design of our library as prototypical, we place the greatest emphasis on completeness, exactness, efficiency, documentation and reusability. Beyond its direct contribution to EXACUS, the approach presented here, through its particular requirements, also had a substantial influence on fundamental parts of EXACUS. In particular, this work contributed to the generic number type support and the use of modular methods within EXACUS. In the course of the ongoing integration of EXACUS into CGAL, these parts have already been successfully developed into mature CGAL packages.
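The core combinatorial step described above — sorting intersection points along each intersection curve and linking consecutive points — can be caricatured as follows. The input encoding, the point identifiers, and the use of floating-point parameter values are all illustrative assumptions; the actual implementation works with exact algebraic parameter values.

```python
from collections import defaultdict

# Hypothetical input: for each intersection curve (of a pair of quadrics),
# the parameter values at which it crosses other quadrics, tagged with a
# point identifier shared by all curves through that intersection point.
curve_events = {
    "C1": [(0.25, "p2"), (0.80, "p3"), (0.10, "p1")],
    "C2": [(0.40, "p3"), (0.15, "p1")],
}

def adjacency_graph(curve_events):
    """Sort the intersection points along each curve by parameter value;
    consecutive points on a curve are adjacent, joined by the curve
    segment between them."""
    edges = defaultdict(set)
    for curve, events in curve_events.items():
        ordered = sorted(events)  # sort by parameter value
        for (_, a), (_, b) in zip(ordered, ordered[1:]):
            edges[a].add((b, curve))
            edges[b].add((a, curve))
    return edges

graph = adjacency_graph(curve_events)
```

Completeness and exactness in the sense of the abstract hinge precisely on the step elided here: comparing and sorting the parameter values exactly, including singular and tangential cases.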
Abstract:
One of the main goals of the CMS experiment is the search for the Standard Model Higgs boson. The 4-lepton channel (from the Higgs decay h->ZZ->4l, l = e,mu) is one of the most promising. The analysis is based on the identification of two opposite-sign, same-flavor lepton pairs: the leptons are required to be isolated and to come from the same primary vertex. The Higgs would be statistically revealed by the presence of a resonance peak in the 4-lepton invariant mass distribution. The 4-lepton analysis at CMS is presented, covering its most important aspects: lepton identification, isolation variables, impact parameter, kinematics, event selection, background control, and the statistical analysis of the results. The search leads to evidence for the presence of a signal with a statistical significance of more than four standard deviations. The excess of data with respect to the background-only prediction indicates the presence of a new boson with a mass of about 126 GeV/c^2, decaying to two Z bosons, whose characteristics are compatible with those of the SM Higgs.
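The resonance search rests on the 4-lepton invariant mass. A minimal sketch of that quantity, computed from lepton four-momenta, is given below; the event and its numbers are purely illustrative, not CMS data.

```python
import math

def invariant_mass(leptons):
    """Invariant mass of a set of particles from their four-momenta
    (E, px, py, pz) in GeV: m = sqrt((sum E)^2 - |sum p|^2)."""
    E = sum(l[0] for l in leptons)
    px = sum(l[1] for l in leptons)
    py = sum(l[2] for l in leptons)
    pz = sum(l[3] for l in leptons)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Illustrative event: four leptons (massless approximation) whose summed
# energy is 126 GeV with a small net momentum.
leptons = [
    (40.0, 0.0, 0.0, 40.0),
    (23.0, 0.0, 0.0, -23.0),
    (33.0, 33.0, 0.0, 0.0),
    (30.0, -30.0, 0.0, 0.0),
]
m4l = invariant_mass(leptons)  # 4-lepton invariant mass in GeV
```

Histogramming m4l over many selected events is what produces the distribution in which the resonance peak near 126 GeV/c^2 is sought.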
Abstract:
Most ocean-atmosphere exchanges take place in polar environments, where the low temperatures favor the absorption of atmospheric gases, in particular CO2. For this reason, alterations of the biogeochemical cycles in these areas can have a strong impact on the global climate. With the aim of contributing to the definition of the mechanisms regulating biogeochemical fluxes, we analyzed the particles collected in the Ross Sea in different years (ROSSMIZE, BIOSESO 1 and 2, ROAVERRS and ABIOCLEAR projects) at two sites (moorings A and B). A more efficient method for preparing sediment trap samples for analysis was developed, and satellite data on sea ice, chlorophyll a and diatom concentration were also processed. At both sites, in each year considered, there was high seasonal and inter-annual variability of the biogeochemical fluxes, closely correlated with sea ice cover and primary productivity. The comparison between the samples collected at moorings A and B in 2008 highlighted the main differences between the two sites: particle fluxes at mooring A, located in a polynya area, are higher than those at mooring B and occur about a month earlier. In the mooring B area it was possible to correlate the particle fluxes with sea ice concentration anomalies and with the atmospheric changes in response to the El Niño Southern Oscillation. In 1996 and 1999, La Niña years, the sea ice concentration in this area was lower than in 1998, an El Niño year; the inverse correlation was found for 2005 and 2008. In the mooring A area, significant differences in mass and biogenic fluxes between 2005 and 2008 were recorded. This made it possible to highlight the high variability of lateral advection processes and to connect them to the physical forcing.
Abstract:
Deep convection over wildfires is one of the most intense forms of atmospheric convection. The extreme cloud dynamics, with high vertical wind speeds (up to 20 m/s) already at cloud base, high water vapor supersaturations (up to 1%), and the high number concentrations of aerosol particles produced by the fire (up to 100,000 cm^-3), provide a special setting for aerosol-cloud interactions. A decisive step in the microphysical evolution of a convective cloud is the activation of aerosol particles into cloud droplets. This activation process determines the initial number and size of the cloud droplets and can therefore influence the development of a convective cloud and its precipitation formation. The most important factors determining the initial number and size of the cloud droplets are the size and hygroscopicity of the aerosol particles available at cloud base, as well as the vertical wind speed. To investigate the influence of these factors under pyro-convective conditions, numerical simulations were carried out with a cloud parcel model featuring a detailed spectral description of cloud microphysics. The results can be divided into three regimes depending on the ratio of vertical wind speed to aerosol number concentration (w/NCN): (1) an aerosol-limited regime (high w/NCN), (2) an updraft-limited regime (low w/NCN), and (3) a transitional regime (intermediate w/NCN). The results show that the variability of the initial cloud droplet number concentration in (pyro-)convective clouds is mainly determined by the variability of the vertical wind speed and the aerosol concentration.
To investigate the microphysical processes within the smoky updraft region of a pyro-convective cloud with detailed spectral microphysics, the parcel model was initialized along a trajectory within the updraft region. This trajectory was computed from three-dimensional simulations of a pyro-convective event with the model ATHAM. It is found that the cloud droplet number concentration increases with increasing aerosol concentration, whereas the size of the cloud droplets decreases with increasing aerosol concentration. The reduced broadening of the droplet spectrum agrees with results from measurements and supports the concept of precipitation suppression in heavily polluted clouds. Building on a realistic parametrization of aerosol particle activation derived from the activation study, the dynamical and microphysical processes of pyro-convective clouds were then investigated with two- and three-dimensional ATHAM simulations. A modern two-moment microphysical scheme was implemented in ATHAM to investigate the influence of the aerosol particle number concentration on the development of idealized pyro-convective clouds in US standard atmospheres for the mid-latitudes and the tropics. The results show that the aerosol number concentration influences the formation of rain. For low aerosol concentrations, rapid rain formation takes place mainly through warm-phase microphysical processes. For higher aerosol concentrations, the ice phase becomes more important for rain formation, which leads to a delayed onset of precipitation in more polluted atmospheres. It is also shown that the composition of the ice-nucleating particles (IN) has a strong influence on the dynamical and microphysical structure of such clouds.
With very efficient IN, rain forms earlier. The investigation of the influence of the atmospheric background profile shows only a small effect of the meteorology on the sensitivity of pyro-convective clouds to the aerosol concentration. Finally, it is shown that the heat released by the fire has a pronounced influence on the development and the cloud top height of pyro-convective clouds. In summary, this dissertation investigates in detail the microphysics of pyro-convective clouds, using idealized simulations with a cloud parcel model with detailed spectral microphysics and a 3D model with a two-moment scheme. It is shown that the extreme conditions with respect to vertical wind speeds and aerosol concentrations have a pronounced influence on the development of pyro-convective clouds.
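The three w/NCN regimes described above can be caricatured with a deliberately crude toy estimate of the activated droplet number: at high w/NCN every particle activates (aerosol-limited), while at low w/NCN the supersaturation the updraft can sustain caps the droplet number (updraft-limited). The power-law form and all constants below are schematic assumptions, not the spectral parcel model of the thesis.

```python
def droplet_number(w, n_cn, c=150.0, k=0.5):
    """Toy activated-droplet estimate (cm^-3).

    w     -- vertical wind speed at cloud base (m/s)
    n_cn  -- aerosol number concentration (cm^-3)
    The updraft-limited branch uses a schematic power law
    n_act ~ c * w**k * sqrt(n_cn); the aerosol-limited branch is n_cn.
    """
    updraft_limited = c * (w ** k) * (n_cn ** 0.5)
    return min(n_cn, updraft_limited)

# Aerosol-limited regime: strong updraft, few particles -> all activate.
clean = droplet_number(w=20.0, n_cn=100.0)
# Updraft-limited regime: weak updraft, fire-like aerosol load -> only a
# fraction of the particles activates.
smoky = droplet_number(w=0.5, n_cn=100000.0)
```

The transitional regime sits where the two branches are comparable; in the thesis this partition emerges from the detailed spectral microphysics rather than from a closed-form expression.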
Abstract:
This thesis presents several techniques designed to drive a swarm of robots through an a-priori unknown environment, moving the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two different theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both theories are based on the study of interactions between entities (also called agents or units) in Multi-Agent Systems (MAS); the first belongs to the field of Artificial Intelligence, the second to that of Distributed Systems. These theories, each from its own point of view, exploit the emergent behaviour that arises from the interaction of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm are exploited to overcome and minimize difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, which helps to keep the environmental information detected by each agent updated across the swarm. Swarm Intelligence is applied through the Particle Swarm Optimization (PSO) algorithm, whose features are exploited as a navigation system. Graph Theory is applied through Consensus and the agreement protocol, with the aim of maintaining the units in a desired, controlled formation. This approach preserves the power of PSO while controlling part of its random behaviour with a distributed control algorithm such as Consensus.
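The PSO-plus-Consensus combination can be sketched as one hybrid update per agent: a standard PSO velocity update (inertia, cognitive and social terms) followed by a consensus step pulling each agent toward its neighbours. The 1-D state, the parameter values, and the way the two steps are composed are illustrative assumptions, not the thesis' actual controller.

```python
import random

def pso_consensus_step(pos, vel, pbest, gbest, neighbors,
                       w=0.7, c1=1.5, c2=1.5, eps=0.2):
    """One hybrid update for all agents (1-D positions for brevity).

    pos, vel   -- current positions and velocities
    pbest      -- each agent's best known position; gbest -- swarm best
    neighbors  -- communication topology: agent index -> neighbour indices
    """
    new_pos, new_vel = [], []
    for i, (x, v) in enumerate(zip(pos, vel)):
        r1, r2 = random.random(), random.random()
        # Standard PSO velocity update: inertia + cognitive + social terms.
        v = w * v + c1 * r1 * (pbest[i] - x) + c2 * r2 * (gbest - x)
        x = x + v
        # Agreement protocol: x_i <- x_i + eps * sum_j (x_j - x_i),
        # using the neighbours' pre-update positions.
        x += eps * sum(pos[j] - x for j in neighbors[i])
        new_pos.append(x)
        new_vel.append(v)
    return new_pos, new_vel
```

The consensus term contracts the formation around the neighbourhood average, which is what tames part of PSO's random exploration while the PSO terms keep driving the group toward the target.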
Abstract:
In this thesis, the influence of composition changes on the glass transition behavior of binary liquids in two and three spatial dimensions (2D/3D) is studied in the framework of mode-coupling theory (MCT). The well-established MCT equations are generalized to isotropic and homogeneous multicomponent liquids in arbitrary spatial dimensions. Furthermore, a new method is introduced which allows a fast and precise determination of special properties of glass transition lines. The new equations are then applied to the following model systems: binary mixtures of hard disks/spheres in 2D/3D, binary mixtures of dipolar point particles in 2D, and binary mixtures of dipolar hard disks in 2D. Some general features of the glass transition lines are also discussed. The direct comparison of the binary hard disk/sphere models in 2D/3D shows similar qualitative behavior. In particular, for binary mixtures of hard disks in 2D the same four so-called mixing effects are identified as have been found before by Götze and Voigtmann for binary hard spheres in 3D [Phys. Rev. E 67, 021502 (2003)]. For instance, depending on the size disparity, adding a second component to a one-component liquid may lead to a stabilization of either the liquid or the glassy state. The MCT results for the 2D system are on a qualitative level in agreement with available computer simulation data. Furthermore, the glass transition diagram found for binary hard disks in 2D strongly resembles the corresponding random close packing diagram. Concerning dipolar systems, it is demonstrated that the experimental system of König et al. [Eur. Phys. J. E 18, 287 (2005)] is well described by binary point dipoles in 2D, through a comparison between the experimental partial structure factors and those from computer simulations. For such mixtures of point particles it is demonstrated that MCT always predicts a plasticization effect, i.e.
a stabilization of the liquid state due to mixing, in contrast to binary hard disks in 2D or binary hard spheres in 3D. It is demonstrated that the predicted plasticization effect is in qualitative agreement with experimental results. Finally, a glass transition diagram for binary mixtures of dipolar hard disks in 2D is calculated. These results demonstrate that at higher packing fractions there is a competition between the mixing effects occurring for binary hard disks in 2D and those for binary point dipoles in 2D.
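For orientation, the MCT equations generalized in the thesis have, for a one-component liquid, the well-known schematic form below; the multicomponent, arbitrary-dimension generalization derived in the thesis replaces these quantities by their matrix-valued analogues.

```latex
\ddot{\phi}_q(t) + \nu_q\,\dot{\phi}_q(t) + \Omega_q^2\,\phi_q(t)
  + \Omega_q^2 \int_0^t m_q(t-t')\,\dot{\phi}_q(t')\,\mathrm{d}t' = 0 ,
```

where $\phi_q(t)$ is the normalized density correlator at wave number $q$ and the memory kernel is a bilinear functional of the correlators, $m_q(t) = \sum_{\vec k + \vec p = \vec q} V(\vec q,\vec k,\vec p)\,\phi_k(t)\,\phi_p(t)$, with vertices $V$ determined by the static structure factors. Glass transition lines are the loci in control-parameter space where the long-time limit of $\phi_q(t)$ jumps from zero to a nonzero value.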
Abstract:
The subject of the present thesis is the enhancement of orbiter spacecraft navigation capabilities, beyond those obtained with the standard radiometric link, by taking advantage of an imaging payload and a novel definition of optical measurements. The ESA mission to Mercury BepiColombo was selected as the reference case for this study, and in particular its Mercury Planetary Orbiter (MPO), because of the presence of SIMBIO-SYS, an instrument suite in the MPO payload capable of acquiring high-resolution images of the surface of Mercury. Optical measurements for navigation can provide information complementary to Doppler, allowing enhanced performance or a relaxation of the radio tracking requirements in terms of ground station scheduling. Classical optical techniques based on centroids, limbs or landmarks formed the basis of a novel idea for optical navigation inspired by concepts of stereoscopic vision. In brief, the relation between two overlapping images acquired by a nadir-pointed orbiter spacecraft at different times is defined, and this information is then formulated as an optical measurement to be processed by a navigation filter. The formulation of this novel optical observable is presented, and the analysis of its possible impact on the mission budget and on image scheduling is addressed. Simulations are conducted using an orbit determination software already in use for spacecraft navigation, in which the proposed optical measurements were implemented, and the final results are given.
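The geometric core of relating two overlapping nadir images can be caricatured by converting an apparent feature shift (in pixels) between the two acquisitions into a ground displacement via the ground sampling distance. The function, its parameters and the numbers below are hypothetical illustrations, not SIMBIO-SYS specifications or the thesis' actual observable.

```python
def ground_displacement(shift_px, focal_mm, pixel_um, altitude_km):
    """Apparent feature shift between two nadir images -> ground
    displacement (m), via the ground sampling distance
    GSD = altitude * pixel_pitch / focal_length (pinhole camera model)."""
    gsd_m = (altitude_km * 1e3) * (pixel_um * 1e-6) / (focal_mm * 1e-3)
    return shift_px * gsd_m

# Illustrative numbers: 800 mm focal length, 10 um pixels, 480 km altitude
# give a GSD of 6 m/px; a 120 px shift then maps to 720 m on the ground.
d = ground_displacement(shift_px=120.0, focal_mm=800.0,
                        pixel_um=10.0, altitude_km=480.0)
```

In a navigation filter, such an image-to-image displacement, compared against the displacement predicted from the current orbit estimate, is what carries the orbit-correction information.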
Abstract:
Co-evolving with the human host, the gut microbiota establishes configurations which vary under the pressure of inflammation, disease, ageing, diet and lifestyle. In order to describe the multi-stability of the microbiome-host relationship, we studied specific tracts of the bacterial trajectory during the human lifespan and characterized peculiar deviations from the hypothetical development caused by disease, using molecular techniques such as phylogenetic microarrays and next-generation sequencing. Firstly, we characterized the enterocyte-associated microbiota in breast-fed infants and adults, describing remarkable differences between the two groups of subjects. Subsequently, we investigated the impact of atopy on the development of the microbiome in Italian children, highlighting conspicuous deviations from the child-type microbiota of the Italian controls. To explore variation in the gut microbiota depending on geographical origin, which reflects different lifestyles, we compared the phylogenetic diversity of the intestinal microbiota of the Hadza hunter-gatherers of Tanzania and of Italian adults. Additionally, we characterized the aged-type microbiome, describing the changes occurring in the metabolic potential of the gut microbiota of centenarians with respect to younger individuals, as part of the pathophysiology of the ageing process. Finally, we evaluated the impact of a probiotic intervention on the intestinal microbiota of elderly people, showing the repair of some age-related dysbioses. These studies help to elucidate several aspects of intestinal microbiome development during the human lifespan, depicting the microbiota as an extremely plastic entity, capable of being reconfigured in response to different environmental factors and/or stressors of endogenous origin.
Abstract:
China is a large country characterized by remarkable growth and distinct regional diversity. Spatial disparity has long been a central issue, as China has been striving to follow a balanced growth path while still confronting unprecedented pressures and challenges. To better understand the level of inequality across the spatial distributions of Chinese provinces and municipalities, and to estimate the dynamic trajectory of sustainable development in China, I constructed the Composite Index of Regional Development (CIRD) with five sub-pillars/dimensions: the Macroeconomic Index (MEI), Science and Innovation Index (SCI), Environmental Sustainability Index (ESI), Human Capital Index (HCI) and Public Facilities Index (PFI), endeavoring to cover the various fields of regional socioeconomic development. Ranking reports on the five sub-dimensions and on the aggregated CIRD are provided in order to better measure the developmental degrees of 31 (or 30) Chinese provinces and municipalities over the 13 years from 1998 to 2010, the time span of three "Five-Year Plans". Further empirical applications of the CIRD focused on clustering and convergence estimation, attempting to fill the gap in quantifying the levels of comprehensive regional socioeconomic development and in estimating the dynamic convergence trajectory of regional sustainable development in the long run. Four geographically oriented clusters were identified on the basis of cluster analysis, and club convergence was observed among the Chinese provinces and municipalities based on stochastic kernel density estimation.
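A composite index of this kind is typically built by normalizing each sub-pillar across regions and aggregating with weights. The sketch below uses min-max normalization and equal weights as assumptions for illustration; the thesis' actual normalization and weighting scheme may differ.

```python
def normalize(values):
    """Min-max normalize one sub-indicator across regions to [0, 1]
    (assumes the indicator is not constant across regions)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(sub_indices, weights=None):
    """Aggregate sub-pillars (e.g. MEI, SCI, ESI, HCI, PFI) into one
    score per region; equal weights by default (an assumption, not the
    thesis' weighting)."""
    pillars = [normalize(col) for col in sub_indices]
    n = len(pillars)
    weights = weights or [1.0 / n] * n
    return [sum(w * p[i] for w, p in zip(weights, pillars))
            for i in range(len(pillars[0]))]

# Three regions, five pillars (hypothetical raw scores, one row per pillar):
scores = composite_index([[1.0, 2.0, 3.0],     # MEI
                          [10.0, 30.0, 20.0],  # SCI
                          [0.2, 0.5, 0.9],     # ESI
                          [5.0, 5.5, 6.0],     # HCI
                          [100.0, 80.0, 120.0]])  # PFI
```

The resulting per-region scores are what feed the ranking reports, the cluster analysis and the convergence estimation described above.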
Abstract:
One of the most dangerous situations for a tractor driver is a lateral rollover in operating conditions. Several accidents involving tractor rollover have indeed been recorded, requiring the design of a robust Roll-Over Protective Structure (ROPS). The aim of the thesis was to evaluate tractor behaviour in the rollover phase so as to calculate the energy absorbed by the ROPS to ensure driver safety. A mathematical model representing the behaviour of a generic tractor during a lateral rollover is proposed, with the possibility of modifying the geometry and inertia of the tractor and the environmental boundary conditions. The purpose is to define a method for predicting the elasto-plastic behaviour of the successive impacts occurring in the rollover phase. A tyre impact model capable of analysing the influence of the wheels on the energy to be absorbed by the ROPS has also been developed. Different tractor design parameters affecting the rollover behaviour, such as mass and dimensions, have been considered, permitting the evaluation of their influence on the amount of energy to be absorbed by the ROPS. The mathematical model was designed and calibrated against the results of actual lateral upset tests carried out on a narrow-track tractor. The dynamic behaviour of the tractor and the energy absorbed by the ROPS obtained from the actual tests were shown to match the results of the model. The proposed approach represents a valuable tool for understanding the dynamics (kinetic energy) and kinematics (position, velocity, angular velocity, etc.) of the tractor in the phases of lateral rollover and the factors mainly affecting the event. The amount of energy to be absorbed in some accident cases can be predicted with good accuracy, which can help in designing protective structures or active safety devices.
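The scale of the energy the ROPS must absorb can be sketched with a simple energy balance at impact: the potential energy released by the drop of the centre of gravity plus the rotational kinetic energy about the roll axis. This is only a schematic single-impact balance with hypothetical figures, not the thesis' calibrated multi-impact elasto-plastic model.

```python
G = 9.81  # gravitational acceleration, m/s^2

def impact_energy(mass_kg, cg_drop_m, inertia_kgm2, omega_rad_s):
    """Energy available at ROPS impact during a lateral rollover:
    potential-energy release of the centre of gravity plus rotational
    kinetic energy about the roll axis."""
    return mass_kg * G * cg_drop_m + 0.5 * inertia_kgm2 * omega_rad_s ** 2

# Illustrative narrow-track tractor figures (not test data from the thesis):
E = impact_energy(mass_kg=1800.0, cg_drop_m=0.6,
                  inertia_kgm2=900.0, omega_rad_s=2.0)  # joules
```

Comparing such an energy figure against the elasto-plastic absorption capacity of a candidate ROPS is the basic design check the abstract's model refines, impact by impact.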
Abstract:
In this work, the well-known Monte Carlo code FLUKA was used to simulate the GE PETtrace cyclotron (16.5 MeV) installed at the "S. Orsola-Malpighi" University Hospital (Bologna, IT) and routinely used in the production of positron-emitting radionuclides. Simulations yielded estimates of various quantities of interest, including: the effective dose distribution around the equipment; the effective number of neutrons produced per incident proton and their spectral distribution; the activation of the structure of the cyclotron and of the vault walls; the activation of the ambient air, in particular the production of 41Ar; and the assessment of the saturation yield of radionuclides used in nuclear medicine. The simulations were validated against experimental measurements in terms of the physical and transport parameters to be used in the energy range of interest in the medical field. The validated model was also used extensively in several practical applications, including the direct cyclotron production of non-standard radionuclides such as 99mTc; the production of medical radionuclides at the TRIUMF (Vancouver, CA) TR13 cyclotron (13 MeV); the complete design of the new PET facility of the "Sacro Cuore - Don Calabria" Hospital (Negrar, IT), including the ACSI TR19 (19 MeV) cyclotron; the dose field around the energy selection system (degrader) of a proton therapy cyclotron; the design of plug doors for a new cyclotron facility in which a 70 MeV cyclotron will be installed; and the partial decommissioning of a PET facility, including the replacement of a Scanditronix MC17 cyclotron with a new TR19 cyclotron.
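The saturation yield mentioned above reflects the standard activation law: during irradiation at constant beam current, the activity grows toward a saturation value as A(t) = A_sat * (1 - exp(-lambda * t)). The sketch below illustrates this buildup; the saturation activity value is an arbitrary placeholder, while the 18F half-life (~109.8 min) is a standard nuclear-data figure.

```python
import math

def activity(t_irr_s, half_life_s, a_sat):
    """Activity after an irradiation of duration t_irr_s for a
    radionuclide with the given half-life:
    A(t) = A_sat * (1 - exp(-lambda * t)), lambda = ln(2)/T_half.
    a_sat is the saturation activity at the given beam current."""
    lam = math.log(2) / half_life_s
    return a_sat * (1.0 - math.exp(-lam * t_irr_s))

# 18F (half-life ~109.8 min): irradiating for one half-life yields
# exactly half of the saturation activity.
half_life = 109.8 * 60  # seconds
a = activity(t_irr_s=half_life, half_life_s=half_life, a_sat=100.0)
```

This is why irradiations much longer than a few half-lives bring diminishing returns: the activity approaches A_sat asymptotically.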
Abstract:
The fabrication of polymer solar cells from the aqueous phase is an attractive alternative to the conventional solvent-based formulation. The advantages of solar cells prepared from aqueous solution lie particularly in the environmentally friendly production process and in the possibility of generating printable optoelectronic devices. The processability of hydrophobic semiconductors in an aqueous medium is achieved by dispersing the materials in the form of nanoparticles. The semiconductors are transferred into a dispersion via the solvent evaporation method. The idea of particle-based solar cells has been realized before, but a precise characterization of the particles and a comprehensive understanding of the entire fabrication process were lacking. The aim of this work is therefore to gain detailed insight into the fabrication process of particle-based solar cells, to uncover possible weaknesses and to eliminate them, in order to improve future applications. For the preparation of solar cells from aqueous dispersions, poly(3-hexylthiophene-2,5-diyl)/[6,6]-phenyl-C61-butyric acid methyl ester (P3HT/PCBM) was used as the donor/acceptor system. The investigations focused on the one hand on the particle morphology and on the other hand on the generation of a suitable particle layer; both parameters affect the solar cell efficiency. The morphology was determined both spectroscopically via photoluminescence measurements and visually by electron microscopy. In this way the particle morphology could be fully elucidated, revealing parallels to the structure of solvent-based solar cells. In addition, a dependence of the morphology on the preparation temperature was observed, which allows simple control of the particle structure.
For the formation of the particle layer, direct as well as interface-mediated coating methods were employed. Of these techniques, only spin coating proved to be a viable method for transferring the particles from the dispersion into a homogeneous film. A further focus of this work was the post-treatment of the particle layer by ethanol washing and thermal annealing. Both measures had a positive effect on the efficiency of the solar cells and contributed decisively to their improvement. Overall, the insights gained provide a detailed overview of the challenges that arise when using water-based dispersions. The requirements of particle-based solar cells were laid open, enabling the fabrication of a solar cell with an efficiency of 0.53%. This result is not yet optimal and leaves room for further improvement.