988 results for ultrafast physics


Relevance:

20.00%

Publisher:

Abstract:

Seyfert galaxies are the closest active galactic nuclei. As such, we can use them to test the physical properties of the entire class of objects. To investigate their general properties, I took advantage of different methods of data analysis. In particular, I used three different samples of objects that, despite frequent overlaps, were chosen to best tackle different topics: the heterogeneous BeppoSAX sample was optimized to test the average hard X-ray (E above 10 keV) properties of nearby Seyfert galaxies; the X-CfA sample was optimized to compare the properties of low-luminosity sources with those of higher-luminosity ones and was thus also used to test emission-mechanism models; finally, the XMM–Newton sample was extracted from the X-CfA sample so as to ensure a truly unbiased and well-defined sample with which to define the average properties of Seyfert galaxies. Taking advantage of the broad-band coverage of the BeppoSAX MECS and PDS instruments (~2-100 keV), I infer the average X-ray spectral properties of nearby Seyfert galaxies, in particular the photon index (~1.8), the high-energy cut-off (~290 keV), and the relative amount of cold reflection (~1.0). Moreover, the unified scheme for active galactic nuclei was positively tested. The distributions of the isotropic indicators used here (photon index, relative amount of reflection, high-energy cut-off, and narrow FeK energy centroid) are similar in type I and type II objects, while the absorbing column and the iron-line equivalent width differ significantly between the two classes of sources, with type II objects displaying larger absorbing columns. Taking advantage of the XMM–Newton and X-CfA samples, I also deduced from measurements that 30 to 50% of type II Seyfert galaxies are Compton thick. Confirming previous results, the narrow FeK line in Seyfert 2 galaxies is consistent with being produced in the same matter responsible for the observed obscuration.
These results support the basic picture of the unified model. Moreover, an X-ray Baldwin effect in type I sources has been measured using, for the first time, the 20-100 keV luminosity (EW proportional to L(20-100)^(−0.22±0.05)). This finding suggests that the torus covering factor may be a function of source luminosity, thereby suggesting a refinement of the baseline version of the unified model itself. Using the BeppoSAX sample, a possible correlation between the photon index and the amount of cold reflection has also been recorded in both type I and type II sources. At first glance this confirms thermal Comptonization as the most likely origin of the high-energy emission of active galactic nuclei. This relation, in fact, naturally emerges by supposing that the accretion disk penetrates the central corona at different depths depending on the accretion rate (Merloni et al. 2006): the more highly accreting systems host disks extending down to the last stable orbit, while the less accreting systems host truncated disks. On the other hand, the study of the well-defined X-CfA sample of Seyfert galaxies has shown that the intrinsic X-ray luminosity of nearby Seyfert galaxies can span values between 10^(38) and 10^(43) erg s^(−1), i.e. covering a huge range of accretion rates. The less efficient systems have been supposed to host ADAF systems without an accretion disk. However, the study of the X-CfA sample has also proved the existence of correlations between optical emission lines and X-ray luminosity over the entire range of L_X covered by the sample. These relations are similar to those obtained when only high-luminosity objects are considered. Thus the emission mechanism must be similar in luminous and weak systems. A possible scenario to reconcile these somewhat contradictory indications is to assume that the ADAF and the two-phase mechanism co-exist, with their relative importance changing from low- to high-accretion systems (as suggested by the Gamma vs. R relation).
The present data require that no abrupt transition between the two regimes be present. As mentioned above, the possible presence of an accretion disk has been tested using samples of nearby Seyfert galaxies. Here, to investigate in depth the flow patterns close to supermassive black holes, three case-study objects for which sufficient count statistics are available have been analyzed using deep X-ray observations taken with XMM–Newton. The results show that the accretion flow can differ significantly between objects when analyzed in appropriate detail. For instance, the accretion disk is well established down to the last stable orbit in a Kerr system for IRAS 13197-1627, where strong light-bending effects have been measured. The accretion disk seems to form by spiraling in within the inner ~10-30 gravitational radii in NGC 3783, where time-dependent and recursive modulations have been measured both in the continuum emission and in the broad emission-line component. Finally, the accretion disk seems to be only weakly detectable in Mrk 509, with its weak broad emission-line component. In addition, blueshifted resonant absorption lines have been detected in all three objects. This seems to demonstrate that, around supermassive black holes, there is matter that is not confined to the accretion disk and moves along the line of sight with velocities as large as v ~ 0.01-0.4c (where c is the speed of light). Whether this matter forms winds or blobs is still a matter of debate, together with the assessment of the real statistical significance of the measured absorption lines. Nonetheless, if confirmed, these phenomena are of outstanding interest because they offer new potential probes of the dynamics of the innermost regions of accretion flows, a handle on the formation of ejecta/jets, and constraints on the rate of kinetic energy injected by AGNs into the ISM and IGM.
Future high energy missions (such as the planned Simbol-X and IXO) will likely allow an exciting step forward in our understanding of the flow dynamics around black holes and the formation of the highest velocity outflows.
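
The X-ray Baldwin effect quoted above is a simple power law in the hard X-ray luminosity. As an illustration (only the slope −0.22 comes from the text; the normalization ew0 and reference luminosity L0 below are hypothetical), it can be written as:

```python
def fek_equivalent_width(L, L0=1.0, ew0=100.0, slope=-0.22):
    """Illustrative X-ray Baldwin relation: EW proportional to L^(-0.22).

    L and L0 are 20-100 keV luminosities in arbitrary units; ew0 (in eV)
    is a hypothetical normalization, not a value from the thesis.
    """
    return ew0 * (L / L0) ** slope

# A factor-10 increase in luminosity reduces the equivalent width by ~40%.
ratio = fek_equivalent_width(10.0) / fek_equivalent_width(1.0)
```

With the −0.22 slope, each decade in luminosity multiplies the line equivalent width by 10^(−0.22) ≈ 0.60, which is the sense in which the torus covering factor would shrink with luminosity.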

Several activities were conducted during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study of an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board that we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic sources of a PMT (photomultiplier tube). The low-power analog acquisition makes it possible to sample multiple channels of the PMT simultaneously at different gain factors, in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been written to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After validation of the whole front-end architecture, this feature would probably be integrated into a common mixed-signal ASIC (application-specific integrated circuit). The volatile nature of the FPGA's configuration memory required the integration of a flash ISP (in-system programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory, the behavior of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA.
The PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to a loan from the Roma University and INFN groups, a full readout chain equivalent to that of NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which were able to receive and execute commands issued by the PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase 2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front-end while inheriting most of the digital logic of the current DAQ board discussed in this thesis. Concerning the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (monolithic active pixel sensor) device called APSEL-4D. The chip is a matrix of 4096 active pixel sensors with deep N-well implantations meant for charge collection and for shielding the analog electronics from digital noise. It integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) line at CERN, Geneva.
The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which made it possible to store about 90 million events in 7 equivalent days of beam live-time. My activities basically concerned the realization of a firmware interface to and from the MAPS chip, in order to integrate it into the general DAQ system. Thereafter, I worked on the DAQ software to implement a proper slow-control interface for the APSEL-4D. Several APSEL-4D chips of different thicknesses were tested during the test beam. Those thinned to 100 and 300 um presented an overall efficiency of about 90% at a threshold of 450 electrons. The test beam also allowed the resolution of the pixel sensor to be estimated, providing good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual plot, taking the multiple-scattering effect into account.
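
The pitch/sqrt(12) figure quoted above is the standard intrinsic resolution of a binary (hit/no-hit) segmented detector: the RMS of a uniform distribution over one pitch. A quick numeric check (the 50 um pitch below is a hypothetical example, not the APSEL-4D value):

```python
import math

def binary_resolution(pitch_um):
    """Intrinsic resolution of a digital pixel/strip detector: the RMS of
    a uniform distribution over one readout pitch, i.e. pitch/sqrt(12)."""
    return pitch_um / math.sqrt(12.0)

res = binary_resolution(50.0)  # hypothetical 50 um pitch -> ~14.4 um
```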

A polar stratospheric cloud submodel has been developed and incorporated into a general circulation model including atmospheric chemistry (ECHAM5/MESSy). The formation and sedimentation of polar stratospheric cloud (PSC) particles can thus be simulated, as well as the heterogeneous chemical reactions that take place on the PSC particles. For solid PSC particle sedimentation, the need for a tailor-made algorithm has been elucidated. A sedimentation scheme based on first-order approximations of vertical mixing-ratio profiles has been developed. It produces relatively little numerical diffusion and copes well with divergent or convergent sedimentation velocity fields. For the determination of solid PSC particle sizes, an efficient algorithm has been adapted. It assumes a monodisperse radius distribution and thermodynamic equilibrium between the gas phase and the solid particle phase. This scheme, though relatively simple, is shown to produce particle number densities and radii within the observed range. The combined effects of the representations of sedimentation and of solid PSC particles on the vertical H2O and HNO3 redistribution are investigated in a series of tests. The formation of solid PSC particles, especially of those consisting of nitric acid trihydrate, has been discussed extensively in recent years. Three particle formation schemes in accordance with the most widely used approaches have been identified and implemented. For the evaluation of PSC occurrence, a new data set with unprecedented spatial and temporal coverage was available. A quantitative method for the comparison of simulation results and observations is developed and applied. It reveals that the relative PSC sighting frequency can be reproduced well with the PSC submodel, whereas the detailed modelling of individual PSC events is beyond the scope of coarse global-scale models. In addition to the development and evaluation of new PSC submodel components, parts of existing simulation programs have been improved, e.g.
a method for the assimilation of meteorological analysis data in the general circulation model, the liquid PSC particle composition scheme, and the calculation of heterogeneous reaction rate coefficients. The interplay of these model components is demonstrated in a simulation of stratospheric chemistry with the coupled general circulation model. Tests against recent satellite data show that the model successfully reproduces the Antarctic ozone hole.
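
To make the sedimentation problem concrete: the textbook first-order approach is an upwind step that moves a fixed fraction of each layer's mixing ratio to the layer below per time step. The sketch below is that generic scheme, not the tailor-made first-order reconstruction developed in the thesis; profile, fall speed, and grid values are arbitrary illustrations:

```python
import numpy as np

def sediment_step(q, w, dz, dt):
    """One first-order upwind step for particles settling downward.

    q:  mixing-ratio profile, index 0 = top level
    w:  fall speed (m/s, positive downward), scalar or per-level array
    dz: layer thickness (m); dt: time step (s); requires w*dt/dz <= 1.

    Generic textbook upwind scheme for illustration only; the thesis uses
    a tailor-made scheme with less numerical diffusion.
    """
    flux = w * q * dt / dz          # fraction of each level leaving downward
    q_new = q - flux                # loss to the level below
    q_new[1:] += flux[:-1]          # gain from the level above
    return q_new                    # mass exits only through the bottom level

profile = np.array([1.0, 0.0, 0.0, 0.0])      # all mass in the top layer
stepped = sediment_step(profile, w=1.0, dz=100.0, dt=10.0)
```

With a Courant number of w*dt/dz = 0.1, one step moves 10% of the top layer's content one level down while conserving total mass (until material reaches the lowest level). The diffusive smearing of such a scheme is exactly what motivates the tailor-made algorithm described above.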

In this work we develop and analyze an adaptive numerical scheme for simulating a class of macroscopic semiconductor models. First, the numerical modelling of semiconductors is reviewed in order to classify the energy-transport models for semiconductors that are later simulated in 2D. In this class of models, the flow of charged particles (negatively charged electrons and so-called holes, which are quasi-particles of positive charge), as well as their energy distributions, is described by a coupled system of nonlinear partial differential equations. A considerable difficulty in simulating these convection-dominated equations is posed by the nonlinear coupling, as well as by the fact that local phenomena such as "hot electron effects" are only partially assessable through the given data. The primary variables used in the simulations are the particle density and the particle energy density. The user of these simulations is mostly interested in the current flow through parts of the domain boundary, the contacts. The numerical method considered here uses mixed finite elements as trial functions for the discrete solution. The continuous discretization of the normal fluxes is the most important property of this discretization from the user's perspective. It is proven that, under certain assumptions on the triangulation, the particle density remains positive in the iterative solution algorithm. Connected to this result, an a priori error estimate for the discrete solution of linear convection-diffusion equations is derived. The local charge-transport phenomena are resolved by an adaptive algorithm based on a posteriori error estimators. At that stage, a comparison of different estimators is performed. Additionally, a method to effectively estimate the error in local quantities derived from the solution, so-called "functional outputs", is developed by transferring the dual weighted residual method to mixed finite elements.
For a model problem, we show how this method can deliver promising results even when standard error estimators fail completely to reduce the error in an iterative mesh-refinement process.

In this work, time-resolved photoemission electron microscopy (TR-PEEM) was developed for the in-situ investigation of ultrafast dynamic processes in thin, microstructured magnetic films during a rapidly changing external magnetic field. The experiment is based on exploiting XMCD contrast (X-ray magnetic circular dichroism) with circularly polarized light from synchrotron radiation sources (the electron storage rings BESSY II (Berlin) and ESRF (Grenoble)) for dynamic imaging of the magnetic domains during ultrafast magnetization processes. The method developed here was realized as a successful combination of high spatial and high temporal resolution (better than 55 nm and 15 ps, respectively). With this method it could be demonstrated that the magnetization dynamics in large Permalloy microstructures (40 µm x 80 µm and 20 µm x 80 µm, 40 nm thick) proceeds via incoherent rotation of the magnetization and is accompanied by the formation of time-dependent transition domains that block the magnetization reversal. New, marked differences were found between the magnetic response of a given thin-film microstructure to a pulsed external magnetic field and the quasi-static case. This concerns the appearance of transient spatio-temporal domain patterns, and of particular fine structures within these patterns, which do not occur in the quasi-static case. Examples of such domain patterns in Permalloy microstructures of various shapes and sizes were investigated and discussed. In particular, the rapid broadening of domain walls due to the precessional magnetization process, the formation of transient domain walls and transient vortices, and the appearance of a striped domain phase resulting from the incoherent rotation of the magnetization were discussed.

Furthermore, the method was applied to the investigation of standing spin waves in ultrathin (16 µm x 32 µm, 10 nm thick) Permalloy microstructures. In a rectangular microstructure oriented perpendicular to the periodic excitation field, an induced magnetic moment was found. This phenomenon was interpreted as a "self-trapping" spin-wave mode. It was shown that a driven normal mode stabilizes itself by displacement of a 180° Néel wall. If the system is excited just below its resonance frequency, the magnetization distribution adapts such that as much as possible of the energy injected by the excitation field remains in the system. Above a certain threshold, the spin-wave mode near the resonance frequency exerts an effective force perpendicular to the 180° Néel wall. This force arises at the center of the microstructure and is compensated by the stray-field-induced force. As an additional capability, the stray fields of magnetic microstructures were determined quantitatively during the dynamic processes, and the precise temporal profile of the stray field was investigated. It was shown that the time-resolved photoemission electron microscope can be used as an ultrafast, surface-sensitive magnetometer.

The subject of this thesis lies in the area of applied mathematics known as inverse problems. Inverse problems are those in which a set of measured data is analysed in order to extract as much information as possible about a model which is assumed to represent a system in the real world. We study two inverse problems in the fields of classical and quantum physics: QCD condensates from tau-decay data, and the inverse conductivity problem. Despite a concentrated effort by physicists extending over many years, an understanding of QCD from first principles continues to be elusive. Fortunately, data continue to appear which provide a rather direct probe of the inner workings of the strong interactions. We use a functional method which allows us to extract, within rather general assumptions, phenomenological parameters of QCD (the condensates) from a comparison of the time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of the assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with the help of methods based on QCD sum rules. Electrical impedance tomography (EIT) is a technology developed to image the electrical conductivity distribution of a conductive medium. The technique works by performing simultaneous measurements of direct or alternating electric currents and voltages on the boundary of an object. These are the data used by an image reconstruction algorithm to determine the electrical conductivity distribution within the object. In this thesis, two approaches to EIT image reconstruction are proposed. The first is based on reformulating the inverse problem in terms of integral equations; this method uses only a single set of measurements for the reconstruction. The second is an algorithm based on linearisation which uses more than one set of measurements.
A promising result is that one can qualitatively reconstruct the conductivity inside the cross-section of a human chest. Even though the human volunteer is neither two-dimensional nor circular, such reconstructions can be useful in medical applications: monitoring for lung problems such as accumulating fluid or a collapsed lung and noninvasive monitoring of heart function and blood flow.
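
Linearised EIT reconstruction of the kind mentioned above is commonly implemented as a regularized least-squares update on the conductivity perturbation. The sketch below shows that generic pattern only; the Jacobian J, the voltage-residual vector dV, and the Tikhonov weight alpha are all placeholders, and this is not the specific algorithm developed in the thesis:

```python
import numpy as np

def linearized_eit_step(J, dV, alpha=1e-3):
    """One Tikhonov-regularized linearised update: solve
    (J^T J + alpha I) d_sigma = J^T dV.

    J:     sensitivity (Jacobian) of boundary voltages w.r.t. conductivity
    dV:    measured minus predicted boundary voltages
    alpha: regularization weight (hypothetical choice).
    """
    JtJ = J.T @ J
    return np.linalg.solve(JtJ + alpha * np.eye(JtJ.shape[0]), J.T @ dV)
```

The regularization term is what keeps the step stable despite the severe ill-posedness of the conductivity problem; without it, noise in dV would be amplified without bound.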

In this thesis, we extend some ideas of statistical physics to describe the properties of human mobility. Using a database containing GPS measurements of individual paths (position, velocity and covered distance at a spatial scale of 2 km or a time scale of 30 s), which covers 2% of the private vehicles in Italy, we succeed in determining some empirical statistical laws pointing out "universal" characteristics of human mobility. By developing simple stochastic models suggesting possible explanations of the empirical observations, we are able to indicate which key quantities and cognitive features rule individuals' mobility. To understand the features of individual dynamics, we have studied different aspects of urban mobility from a physical point of view. We discuss the implications of Benford's law emerging from the distribution of times elapsed between successive trips. We observe how the daily travel-time budget is related to many aspects of the urban environment, and describe how the daily mobility budget is then spent. We link the scaling properties of individual mobility networks to the inhomogeneous average durations of the activities performed, and those of the networks describing people's common use of space to the fractal dimension of the urban territory. We study entropy measures of individual mobility patterns, showing that they carry almost the same information as the related mobility networks, but are also influenced by a hierarchy among the activities performed. We discover that Wardrop's principles are violated, as drivers have only incomplete information on the traffic state and therefore rely on knowledge of average travel times. We propose an assimilation model to resolve the intrinsic scattering of GPS data on the street network, permitting the real-time reconstruction of the traffic state at an urban scale.
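
Benford's law, mentioned above for the inter-trip times, predicts first-digit frequencies P(d) = log10(1 + 1/d), so digit 1 should lead about 30.1% of the values. A minimal check of a sample against that prediction might look like the following (powers of 2 stand in here as a classic Benford-distributed sequence; the thesis applies the law to elapsed times between trips, not to this toy data):

```python
import math
from collections import Counter

def benford_pmf(d):
    """Benford's law: probability that the leading decimal digit equals d."""
    return math.log10(1.0 + 1.0 / d)

def leading_digit(x):
    # scientific notation puts the first significant digit up front
    return int(f"{abs(x):.15e}"[0])

# Leading-digit census of 2^1 .. 2^1000 (illustrative sample only).
digits = Counter(leading_digit(2.0 ** n) for n in range(1, 1001))
observed_freq_1 = digits[1] / 1000.0
expected_freq_1 = benford_pmf(1)  # ~0.301
```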

This thesis reports on the creation and analysis of many-body states of interacting fermionic atoms in optical lattices. The realized system can be described by the Fermi-Hubbard Hamiltonian, which is an important model for correlated electrons in modern condensed matter physics. In this way, ultracold atoms can be utilized as a quantum simulator to study solid-state phenomena. The use of a Feshbach resonance in combination with a blue-detuned optical lattice and a red-detuned dipole trap enables independent control over all relevant parameters in the many-body Hamiltonian. By measuring the in-situ density distribution and the doublon fraction, it has been possible to identify both metallic and insulating phases in the repulsive Hubbard model, including the experimental observation of the fermionic Mott insulator. In the attractive case, the appearance of strong correlations has been detected via an anomalous expansion of the cloud caused by the formation of non-condensed pairs. By monitoring the in-situ density distribution of initially localized atoms during free expansion in a homogeneous optical lattice, a strong influence of interactions on the out-of-equilibrium dynamics within the Hubbard model has been found. The reported experiments pave the way for future studies of magnetic order and fermionic superfluidity in a clean and well-controlled experimental system.
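
For reference, the Fermi-Hubbard Hamiltonian mentioned above has the standard single-band form, with tunnelling amplitude t (set by the lattice depth) and on-site interaction U (tuned here via the Feshbach resonance):

```latex
H = -t \sum_{\langle i,j \rangle, \sigma}
      \left( \hat{c}^{\dagger}_{i\sigma} \hat{c}_{j\sigma} + \mathrm{h.c.} \right)
    + U \sum_{i} \hat{n}_{i\uparrow} \hat{n}_{i\downarrow}
```

Here the first sum runs over nearest-neighbour lattice sites and spin states, and n̂ is the number operator; the independent control of t and U described in the abstract is precisely control of the ratio U/t, which drives the metal-to-Mott-insulator crossover.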

In this thesis, the phenomenology of the Randall-Sundrum setup is investigated. In this context models with and without an enlarged SU(2)_L x SU(2)_R x U(1)_X x P_{LR} gauge symmetry, which removes corrections to the T parameter and to the Z b_L \bar b_L coupling, are compared with each other. The Kaluza-Klein decomposition is formulated within the mass basis, which allows for a clear understanding of various model-specific features. A complete discussion of tree-level flavor-changing effects is presented. Exact expressions for five dimensional propagators are derived, including Yukawa interactions that mediate flavor-off-diagonal transitions. The symmetry that reduces the corrections to the left-handed Z b \bar b coupling is analyzed in detail. In the literature, Randall-Sundrum models have been used to address the measured anomaly in the t \bar t forward-backward asymmetry. However, it will be shown that this is not possible within a natural approach to flavor. The rare decays t \to cZ and t \to ch are investigated, where in particular the latter could be observed at the LHC. A calculation of \Gamma_{12}^{B_s} in the presence of new physics is presented. It is shown that the Randall-Sundrum setup allows for an improved agreement with measurements of A_{SL}^s, S_{\psi\phi}, and \Delta\Gamma_s. For the first time, a complete one-loop calculation of all relevant Higgs-boson production and decay channels in the custodial Randall-Sundrum setup is performed, revealing a sensitivity to large new-physics scales at the LHC.

Theories and numerical modeling are fundamental tools for understanding, optimizing and designing present and future laser-plasma accelerators (LPAs). Laser evolution and plasma wave excitation in an LPA driven by a weakly relativistic, short-pulse laser propagating in a preformed parabolic plasma channel are studied analytically in 3D, including the effects of pulse steepening and energy depletion. At higher laser intensities, the process of electron self-injection in the nonlinear bubble wake regime is studied by means of fully self-consistent particle-in-cell (PIC) simulations. Considering a non-evolving laser driver propagating with a prescribed velocity, the geometrical properties of the non-evolving bubble wake are studied. For a range of parameters of interest for laser-plasma acceleration, the dependence of the self-injection threshold in the non-evolving wake on laser intensity and wake velocity is characterized. Due to the nonlinear and complex nature of the physics involved, computationally challenging numerical simulations are required to model laser-plasma accelerators operating at relativistic laser intensities. The numerical and computational optimizations that, combined in the codes INF&RNO and INF&RNO/quasi-static, make it possible to accurately model multi-GeV laser wakefield acceleration stages on present supercomputing architectures are discussed. The PIC code jasmine, capable of efficiently running laser-plasma simulations on clusters of graphics processing units (GPUs), is presented. GPUs deliver exceptional performance to PIC codes, but the core algorithms had to be redesigned to satisfy the constraints imposed by the intrinsic parallelism of the architecture. The simulation campaigns run with the code jasmine to model the recent LPA experiments with the INFN-FLAME and CNR-ILIL laser systems are also presented.

In this work I present aspects of QCD calculations which are closely tied to the numerical evaluation of NLO QCD amplitudes, in particular the corresponding one-loop contributions, and to the efficient computation of the associated collider observables. Two topics have crystallized in the course of this work, and they constitute its main part. A large part focuses on the group-theoretical behavior of one-loop amplitudes in QCD, with the aim of finding a way to treat the associated color degrees of freedom correctly and efficiently. For this purpose, a new approach is introduced which can be used to express color-ordered one-loop partial amplitudes with multiple quark-antiquark pairs through shuffle sums over cyclically ordered primitive one-loop amplitudes. A second large part focuses on the local subtraction of divergent pole terms in primitive one-loop amplitudes. In particular, a method has been developed to renormalize the primitive one-loop amplitudes locally, using local UV counterterms and efficient recursive routines. Together with suitable local soft and collinear subtraction terms, the subtraction method is thereby extended to the virtual part of the calculation of NLO observables, which enables the fully numerical evaluation of the one-loop integrals in the virtual contributions to NLO observables. The method was finally applied successfully to the computation of NLO jet rates in electron-positron annihilation in the leading-color limit.

The aim of this thesis is to highlight the connections between monoidal categories, the Yang-Baxter equation, and the integrability of certain models. The principal object of our work has been the Frobenius monoid and its connection with C∗-algebras. In this context, all of the proofs exploit the tools of diagrammatic algebra. In the course of the thesis, these proofs have been reproduced in the more familiar language of multilinear algebra, in order to make the results accessible to a wider range of potential readers.

This thesis deals with the investigation of charge generation and recombination processes in three different polymer:fullerene photovoltaic blends by means of ultrafast time-resolved optical spectroscopy. The first donor polymer, namely poly[N-11"-henicosanyl-2,7-carbazole-alt-5,5-(4',7'-di-2-thienyl-2',1',3'-benzothiadiazole)] (PCDTBT), is a mid-bandgap polymer; the other two materials are the low-bandgap donor polymers poly[2,6-(4,4-bis-(2-ethylhexyl)-4H-cyclopenta[2,1-b;3,4-b']-dithiophene)-alt-4,7-(2,1,3-benzothiadiazole)] (PCPDTBT) and poly[(4,4'-bis(2-ethylhexyl)dithieno[3,2-b:2',3'-d]silole)-2,6-diyl-alt-(2,1,3-benzothiadiazole)-4,7-diyl] (PSBTBT). Despite their broader absorption, the low-bandgap polymers do not show enhanced photovoltaic efficiencies compared to the mid-bandgap system.

Transient absorption spectroscopy revealed that energetic disorder plays an important role in the photophysics of PCDTBT, and that in a blend with PCBM geminate losses are small. The photophysics of the low-bandgap system PCPDTBT was strongly altered by adding a high-boiling-point cosolvent to the polymer:fullerene blend, owing to a partial demixing of the materials. We observed an increase in device performance together with a reduction of geminate recombination upon addition of the cosolvent. By applying model-free multivariate curve resolution to the spectroscopic data, we found that fast non-geminate recombination due to polymer triplet-state formation is a limiting loss channel in the low-bandgap material system PCPDTBT, whereas in PSBTBT triplet formation has a smaller impact on device performance, and thus higher efficiencies are obtained.


Resumo:

This work is devoted to the investigation of the photophysical processes that occur in blends of electron donors with electron acceptors for application in organic solar cells. The electron donors used are the copolymer PBDTTT-C, which consists of benzodithiophene and thienothiophene units, and the small molecule p-DTS(FBTTh2)2, which contains a silicon-bridged dithiophene as well as fluorinated benzothiadiazole and dithiophene units. As electron acceptors, a planar perylene-3,4:9,10-tetracarboxylic diimide (PDI) derivative and various fullerene derivatives are employed. PDI derivatives are considered promising alternatives to fullerenes because their structural, optical, and electronic properties can be tuned by chemical synthesis. The strongest argument for PDI derivatives is their absorption in the visible range of the solar spectrum, which can improve the photocurrent. Fullerene-based blends, however, usually surpass donor:PDI blends in efficiency.

To identify the disadvantage of the PDI-based blends relative to the corresponding fullerene-based blends, the different donor-acceptor combinations are examined with respect to their optical, electronic, and structural properties. Time-resolved spectroscopy, above all transient absorption (TA) spectroscopy, is used to analyze charge generation, and the comparison of donor:PDI blend films with donor:fullerene blend films shows that the formation of charge-transfer states constitutes one of the main loss channels.

Furthermore, blends of PBDTTT-C and [6,6]-phenyl-C61-butyric acid methyl ester (PC61BM) are investigated by TA spectroscopy on timescales from ps to µs, and it is shown that the triplet state of the polymer is populated via the non-geminate recombination of free charges on a sub-ns timescale.
Advanced data-analysis methods such as multivariate curve resolution (MCR) are applied to separate overlapping signal contributions. In addition, the regeneration of charge carriers by triplet-triplet annihilation can be demonstrated on a ns-µs timescale. Furthermore, the influence of the solvent additive 1,8-diiodooctane (DIO) on the performance of p-DTS(FBTTh2)2:PDI solar cells is investigated. The findings of morphological and photophysical experiments are combined to relate the structural properties and the photophysics to the relevant device parameters. Time-resolved photoluminescence (TRPL) measurements show that the use of DIO leads to a smaller reduction of the photoluminescence, which can be attributed to larger-scale phase separation. Moreover, TA spectroscopy shows that the use of DIO results in improved crystallinity of the active layer and promotes the generation of free charges. For a detailed analysis of the signal decay, a model is applied that accounts for the simultaneous decay of bound CT states and free charges; optimized donor-acceptor blends show a larger fraction of non-geminate recombination of free charge carriers.

In a further case study, the influence of the fullerene derivative, namely IC60BA or PC71BM, on device performance and photophysics is investigated. A combination of thin-film structural characterization and time-resolved spectroscopy reveals that blends using ICBA as the electron acceptor show poorer separation of charge-transfer states and suffer from stronger geminate recombination than PCBM-based blends. This can be attributed to the smaller driving force for charge separation and to the higher disorder of the ICBA-based blends, both of which hinder charge separation.

Finally, the influence of pure fullerene domains on the performance of organic solar cells based on blends of the thienothiophene-based polymer pBTTT-C14 and PC61BM is investigated. To this end, the photophysics of films with donor:acceptor blend ratios of 1:1 and 1:4 are compared. While the 1:1 blends show only a co-crystalline phase in which the fullerenes intercalate between the side chains of pBTTT, the excess fullerene in the 1:4 samples results in the formation of pure fullerene domains in addition to the co-crystalline phase. Transient absorption spectroscopy shows that charge-transfer states in the 1:1 blends decay mainly via geminate recombination, whereas in the 1:4 blends a considerable fraction of the charges can overcome their mutual Coulomb attraction and form free charge carriers, which eventually recombine non-geminately.
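A decay model of the kind used in this analysis, with bound CT states decaying geminately (monoexponentially) and free charges recombining non-geminately (bimolecularly), can be sketched as follows. All parameter values are illustrative assumptions, not fitted thesis data:

```python
# Two-population model for a normalized transient-absorption decay:
#   signal(t) = f_ct * exp(-k_ct * t)                    geminate CT channel
#             + (1 - f_ct) * n0 / (1 + gamma * n0 * t)   non-geminate channel
# f_ct is the initial fraction of bound CT states; k_ct, gamma, n0 are
# assumed example values, not quantities taken from the thesis.
import math

def ta_signal(t, f_ct=0.4, k_ct=1.0, n0=1.0, gamma=0.2):
    ct = f_ct * math.exp(-k_ct * t)                      # geminate decay
    free = (1.0 - f_ct) * n0 / (1.0 + gamma * n0 * t)    # bimolecular decay
    return ct + free

# A well-optimized blend corresponds to a small f_ct: more of the signal
# decays through the slower, density-dependent non-geminate channel, so
# more signal survives to long delay times.
for t in (0.0, 2.0, 20.0):
    print(f"t={t:5.1f}  optimized={ta_signal(t, f_ct=0.1):.3f}  "
          f"unoptimized={ta_signal(t, f_ct=0.8):.3f}")
```

Fitting a function of this shape to the measured decay separates the geminate and non-geminate contributions, which is how a larger non-geminate fraction in the optimized blends can be quantified.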