Abstract:
Cost, performance and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industry to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry had developed and consolidated over the years to perform timing analysis. Industries that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of the migration to cache-equipped processors. Caches are perceived as an additional source of complexity, with the potential to shatter the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability incurred by caches. Conversely, the application of advanced WCET analysis techniques to real-world industrial software, developed without analysability in mind, is hardly feasible. We propose a development approach aimed at minimising cache jitter, as well as at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) identification of those software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; (iii) implementation of a layout optimisation method to remove cache jitter stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry. The integration of those constituents into a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards - as opposed to becoming so only when the system is final - and is more easily amenable to advanced timing analysis by construction, regardless of the system scale and complexity.
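To make the layout-optimisation idea concrete, the toy sketch below greedily places functions in memory so that pairs that frequently alternate at run time do not map to the same sets of a direct-mapped instruction cache. It is purely illustrative: the cache model, the function names and the greedy strategy are assumptions, not the optimisation method actually developed in the thesis.

```python
# Illustrative greedy code layout: place functions so that "hot" pairs do not
# share cache sets in a direct-mapped instruction cache.  Cache parameters,
# names and strategy are assumptions for illustration only.

CACHE_SIZE = 16 * 1024          # bytes (assumed)
LINE_SIZE = 32                  # bytes (assumed)
N_SETS = CACHE_SIZE // LINE_SIZE

def sets_used(base, size):
    """Cache sets touched by an object of 'size' bytes placed at 'base'."""
    first = base // LINE_SIZE
    last = (base + size - 1) // LINE_SIZE
    return {s % N_SETS for s in range(first, last + 1)}

def layout(functions, hot_pairs):
    """functions: {name: size in bytes}; hot_pairs: pairs that alternate often."""
    placed = {}                              # name -> (base address, cache sets)
    cursor = 0
    for name, size in sorted(functions.items(), key=lambda kv: -kv[1]):
        partners = {b if a == name else a for a, b in hot_pairs if name in (a, b)}
        base, tries = cursor, 0
        # Slide the object forward (at most one full cache) until it no longer
        # conflicts with any already-placed hot partner.
        while tries < N_SETS and any(
                sets_used(base, size) & placed[p][1] for p in partners if p in placed):
            base += LINE_SIZE
            tries += 1
        placed[name] = (base, sets_used(base, size))
        cursor = max(cursor, base + size)
    return {n: b for n, (b, _) in placed.items()}

if __name__ == "__main__":
    funcs = {"isr_timer": 512, "control_loop": 2048, "logger": 1024}
    print(layout(funcs, [("isr_timer", "control_loop")]))
```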
Abstract:
Evolutionary processes within the bird genus Certhia (treecreepers) are investigated and taxonomic uncertainties clarified. The original seven species of the genus have a Holarctic distribution, are morphologically uniform and hence difficult to distinguish. I employed four methodological approaches. 1. Molecular phylogeny using the mitochondrial cytochrome-b gene largely established relationships and revealed two cryptic species. 2. Call and song recordings from all species and many subspecies were evaluated sonagraphically. The nine phylospecies outlined in Part 1 were clearly delimited from one another by time and frequency parameters. They comprise a monophyletic group of "motif singers" and a purely southeast Asian group of "trill singers". Song-character differences were generally consistent with the molecular phylogeny (strong phylogenetic signal). 3. Central European Certhia familiaris in the field responded territorially to playback of verses of allopatric "motif singer" taxa, but usually more weakly than to their own subsequently presented songs. No song characters were unambiguously recognised as species-specific. 4. Standard body dimensions of nearly 2000 museum specimens characterise species and subspecies biometrically and reveal geographic trends. Lengths of bill and hind claw proved to be important parameters for explaining the treecreeper lifestyle (climbing and feeding on tree trunks). In the Himalayas (highest species density), tail dimensions are also significant.
Abstract:
The relationship between emotion and cognition is a topic that raises great interest in research. Recently, a view of these two processes as interactive and mutually influencing each other has become predominant. This dissertation investigates the reciprocal influences of emotion and cognition, at both the behavioral and neural levels, in two specific fields, namely attention and decision-making. Experimental evidence on how emotional responses may affect perceptual and attentional processes has been reported. In addition, the impact of three factors, namely personality traits, motivational needs and social context, in modulating the influence that emotion exerts on perception and attention has been investigated. Moreover, the influence of cognition on emotional responses in decision-making has been demonstrated. The current experimental evidence showed that cognitive brain regions such as the dorsolateral prefrontal cortex are causally implicated in the regulation of emotional responses and that this has an effect at both pre- and post-decisional stages. There are two main conclusions of this dissertation: firstly, emotion exerts a strong influence on perceptual and attentional processes but, at the same time, this influence may also be modulated by other factors internal and external to the individual. Secondly, cognitive processes may modulate prepotent emotional responses, serving a regulative function critical to driving and shaping human behavior in line with current goals.
Abstract:
During this work, an innovative methodology was developed for continuous, in situ gas monitoring (24/24 h) of fumarolic and soil diffuse emissions, applied to the geothermal and volcanic area of Pisciarelli, near Agnano, inside the Campi Flegrei caldera (CFc). In the literature there are only scattered, discrete data on the geochemical gas composition of the fumaroles at Campi Flegrei; only since the early '80s has a systematic record of fumaroles with discrete sampling existed at Solfatara (Bocca Grande and Bocca Nuova fumaroles), and only since 1999 also at the degassing areas of Pisciarelli. This type of sampling has resulted in a time series of geochemical analyses with discontinuous time coverage (on average 2-3 measurements per month), completely inadequate for Civil Defence purposes in such a high-volcanic-risk and densely populated area. For this purpose, and to remedy this lack of data, a new methodology of continuous, in situ sampling was introduced during this study, able to continuously record data on the fumarolic gases and on the related soil diffuse degassing. Thanks to its high sampling density (about one measurement per minute, i.e. 1440 data points per day) and to the numerous species detected (CO2, Ar, 36Ar, CH4, He, H2S, N2, O2), it allows good statistical coverage and the reconstruction of the evolution of the gas composition of the investigated area. The methodology is based on the continuous sampling of fumarole gases and soil degassing through an extraction line which, after a series of condensation processes that remove the water vapour content - better described hereinafter - delivers the gas for analysis to a quadrupole mass spectrometer.
Abstract:
Proxy data are essential for the investigation of climate variability on time scales larger than the historical meteorological observation period. The potential value of a proxy depends on our ability to understand and quantify the physical processes that relate the corresponding climate parameter and the signal in the proxy archive. These processes can be explored under present-day conditions. In this thesis, both statistical and physical models are applied for their analysis, focusing on two specific types of proxies, lake sediment data and stable water isotopes.

In the first part of this work, the basis is established for statistically calibrating new proxies from lake sediments in western Germany. A comprehensive meteorological and hydrological data set is compiled and statistically analyzed. In this way, meteorological time series are identified that can be applied for the calibration of various climate proxies. A particular focus is laid on the investigation of extreme weather events, which have rarely been the objective of paleoclimate reconstructions so far. Subsequently, a concrete example of a proxy calibration is presented. Maxima in the quartz grain concentration from a lake sediment core are compared to recent windstorms. The latter are identified from the meteorological data with the help of a newly developed windstorm index, combining local measurements and reanalysis data. The statistical significance of the correlation between extreme windstorms and signals in the sediment is verified with the help of a Monte Carlo method. This correlation is fundamental for employing lake sediment data as a new proxy to reconstruct windstorm records of the geological past.

The second part of this thesis deals with the analysis and simulation of stable water isotopes in atmospheric vapor on daily time scales. In this way, a better understanding of the physical processes determining these isotope ratios can be obtained, which is an important prerequisite for the interpretation of isotope data from ice cores and the reconstruction of past temperature. In particular, the focus here is on the deuterium excess and its relation to the environmental conditions during evaporation of water from the ocean. As a basis for the diagnostic analysis and for evaluating the simulations, isotope measurements from Rehovot (Israel) are used, provided by the Weizmann Institute of Science. First, a Lagrangian moisture source diagnostic is employed in order to establish quantitative linkages between the measurements and the evaporation conditions of the vapor (and thus to calibrate the isotope signal). A strong negative correlation between relative humidity in the source regions and measured deuterium excess is found. In contrast, sea surface temperature in the evaporation regions does not correlate well with deuterium excess. Although requiring confirmation by isotope data from different regions and longer time scales, this weak correlation might be of major importance for the reconstruction of moisture source temperatures from ice core data. Second, the Lagrangian source diagnostic is combined with a Craig-Gordon fractionation parameterization for the identified evaporation events in order to simulate the isotope ratios at Rehovot. In this way, the Craig-Gordon model can be directly evaluated with atmospheric isotope data, and better constraints for uncertain model parameters can be obtained.
A comparison of the simulated deuterium excess with the measurements reveals that a much better agreement can be achieved using a wind-speed-independent formulation of the non-equilibrium fractionation factor instead of the classical parameterization introduced by Merlivat and Jouzel, which is widely applied in isotope GCMs. Finally, the first steps of the implementation of water isotope physics in the limited-area COSMO model are described, and an approach is outlined that allows simulated isotope ratios to be compared with measurements in an event-based manner by using a water tagging technique. The good agreement between model results from several case studies and measurements at Rehovot demonstrates the applicability of the approach. Because the model can be run at high, potentially cloud-resolving spatial resolution, and because it contains sophisticated parameterizations of many atmospheric processes, a complete implementation of isotope physics will allow detailed, process-oriented studies of the complex variability of stable isotopes in atmospheric waters in future research.
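As an illustration of the kind of Monte Carlo significance test mentioned in the first part above, the sketch below compares the observed number of matches between windstorm years and quartz-peak years against matches obtained for randomly placed peaks. The data, the matching tolerance and the test statistic are placeholders, not those used in the thesis.

```python
# Illustrative Monte Carlo (permutation) test for the association between
# extreme windstorms and quartz-grain maxima in a sediment core.
# All numbers below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def n_matches(storm_years, peak_years, tolerance=1):
    """Count quartz peaks that fall within +/- tolerance years of a storm."""
    return sum(any(abs(p - s) <= tolerance for s in storm_years) for p in peak_years)

def permutation_test(storm_years, peak_years, year_range, n_perm=10_000):
    observed = n_matches(storm_years, peak_years)
    count = 0
    for _ in range(n_perm):
        random_peaks = rng.choice(np.arange(*year_range),
                                  size=len(peak_years), replace=False)
        if n_matches(storm_years, random_peaks) >= observed:
            count += 1
    return observed, (count + 1) / (n_perm + 1)   # one-sided p-value

# Hypothetical example
storms = [1962, 1972, 1984, 1990, 1999]
peaks = [1963, 1972, 1991, 1995]
obs, p = permutation_test(storms, peaks, (1950, 2005))
print(f"matches={obs}, p={p:.4f}")
```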
Abstract:
Stylolites are rough paired surfaces, indicative of localized stress-induced dissolution under a non-hydrostatic state of stress, separated by a clay parting which is believed to be the residuum of the dissolved rock. These structures are the most frequent deformation pattern in monomineralic rocks and thus provide important information about low-temperature deformation and mass transfer. The intriguing roughness of stylolites can be used to assess the amount of volume loss and paleo-stress directions, and to infer the destabilizing processes during pressure solution. But there is little agreement on how stylolites form and why these localized pressure solution patterns develop their characteristic roughness.

Natural bedding-parallel and vertical stylolites were studied in this work to obtain a quantitative description of the stylolite roughness and to understand the governing processes during their formation. Adapting scaling approaches based on fractal principles, it is demonstrated that stylolites show two self-affine scaling regimes with roughness exponents of 1.1 and 0.5 for small and large length scales, separated by a crossover length at the millimeter scale. Analysis of stylolites from various depths proved that this crossover length is a function of the stress field during formation, as analytically predicted. For bedding-parallel stylolites the crossover length is a function of the normal stress on the interface, but vertical stylolites show a clear in-plane anisotropy of the crossover length owing to the fact that the in-plane stresses (σ2 and σ3) are dissimilar. Therefore stylolite roughness contains a signature of the stress field during formation.

To address the origin of stylolite roughness, a combined microstructural (SEM/EBSD) and numerical approach is employed. Microstructural investigations of natural stylolites in limestones reveal that heterogeneities initially present in the host rock (clay particles, quartz grains) are responsible for the formation of the distinctive stylolite roughness. A two-dimensional numerical model, i.e. a discrete linear elastic lattice spring model, is used to investigate the roughness evolving from an initially flat, fluid-filled interface induced by heterogeneities in the matrix. This model generates rough interfaces with the same scaling properties as natural stylolites. Furthermore, two coinciding crossover phenomena in space and in time exist that separate length and time scales for which the roughening is balanced either by surface or by elastic energies. The roughness and growth exponents are independent of the size, amount and dissolution rate of the heterogeneities. This leads to the conclusion that the location of asperities is determined by a polymict multi-scale quenched noise, while the roughening process is governed by inherent processes, i.e. the transition from a surface-energy- to an elastic-energy-dominated regime.
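For a 1-D self-affine profile with Hurst (roughness) exponent H, the power spectrum decays as P(k) ∝ k^-(1+2H), so the two scaling regimes and their crossover length appear as two slopes on a log-log spectrum. The sketch below estimates H from a synthetic profile in this standard way; it is a generic illustration, not the analysis code used in the thesis.

```python
# Generic power-spectrum estimate of the self-affine roughness exponent of a
# 1-D profile; synthetic data, illustrative only.
import numpy as np

def spectrum(profile, dx=1.0):
    """One-sided power spectrum of a linearly detrended 1-D height profile."""
    x = np.arange(len(profile))
    h = profile - np.polyval(np.polyfit(x, profile, 1), x)
    fft = np.fft.rfft(h)
    k = np.fft.rfftfreq(len(h), d=dx)
    return k[1:], np.abs(fft[1:]) ** 2          # drop the zero frequency

def hurst_from_slope(k, p, kmin, kmax):
    """Fit log P = a + b log k over [kmin, kmax]; self-affinity gives b = -(1 + 2H)."""
    m = (k >= kmin) & (k <= kmax)
    b = np.polyfit(np.log(k[m]), np.log(p[m]), 1)[0]
    return -(b + 1) / 2

# Synthetic self-affine profile (random walk, H = 0.5) just to exercise the code
rng = np.random.default_rng(1)
profile = np.cumsum(rng.standard_normal(4096))
k, p = spectrum(profile)
print("estimated H:", round(hurst_from_slope(k, p, k[1], k[-1]), 2))
```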
Abstract:
Over the last 60 years, computers and software have enabled incredible advancements in every field. Nowadays, however, these systems are so complicated that it is difficult, if not impossible, to understand whether they meet some requirement or are able to show some desired behaviour or property. This dissertation introduces a Just-In-Time (JIT) a posteriori approach to conformance checking, aimed at identifying any deviation from the desired behaviour as soon as possible and, possibly, applying some corrections. The declarative framework that implements our approach, entirely developed on the promising open source forward-chaining Production Rule System (PRS) named Drools, consists of three components: 1. a monitoring module based on a novel, efficient implementation of Event Calculus (EC); 2. a general-purpose hybrid reasoning module (the first of its kind) merging temporal, semantic, fuzzy and rule-based reasoning; 3. a logic formalism based on the concept of expectations, introducing Event-Condition-Expectation rules (ECE-rules) to assess the global conformance of a system. The framework is also accompanied by an optional module that provides Probabilistic Inductive Logic Programming (PILP). By shifting the conformance check from after execution to just in time, this approach combines the advantages of many a posteriori and a priori methods proposed in the literature. Quite remarkably, if the corrective actions are explicitly given, the reactive nature of this methodology makes it possible to reconcile any deviation from the desired behaviour as soon as it is detected. In conclusion, the proposed methodology brings some advancements towards solving the problem of conformance checking, helping to fill the gap between humans and increasingly complex technology.
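As a rough intuition for the Event Calculus on which the monitoring module is based (expressed here in Python rather than in the Drools rules actually used, with invented event and fluent names): a fluent holds at time t if some earlier event initiated it and no later event before t terminated it.

```python
# Minimal Event Calculus intuition: HoldsAt(f, t) iff some event before t
# initiates fluent f and no subsequent event before t terminates it.
# Python stand-in for illustration only; event and fluent names are invented.
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    time: int

INITIATES = {"login": "session_open"}       # event -> fluent it initiates
TERMINATES = {"logout": "session_open"}     # event -> fluent it terminates

def holds_at(fluent, t, trace):
    holds = False
    for ev in sorted(trace, key=lambda e: e.time):
        if ev.time >= t:
            break
        if INITIATES.get(ev.name) == fluent:
            holds = True
        elif TERMINATES.get(ev.name) == fluent:
            holds = False
    return holds

trace = [Event("login", 1), Event("logout", 7)]
print(holds_at("session_open", 5, trace))   # True
print(holds_at("session_open", 9, trace))   # False
```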
Abstract:
The present study is based on the use of isotopes for evaluating the efficiency of nutrient removal in a wetland, in particular of nitrogen and nitrates, also comparing the different habitats present in the wetland. Nutrients like nitrogen and phosphorus, normally distributed as fertilizers, are among the principal causes of diffuse pollution. This is particularly important in the Adriatic Sea, which is frequently subjected to eutrophication phenomena. The requalification of wetlands is therefore crucial, since they naturally host depurative processes, such as denitrification and plant uptake, that reduce the pollutant loads flowing into water bodies. In this study, nutrient reduction is analyzed in the wetland of the Comuna drain, whose waters flow into the Venice lagoon. Chemical and isotopic analyses were performed on samples of water, vegetation, soil and sediments taken in the wetlands of the Comuna drain in four different periods of the year, together with data on nitrogen and phosphorus concentration obtained by the LASA of the University of Padova. Values of total nitrogen and nitrates were obtained in order to evaluate the reduction within the different systems of the wetland, while the isotopic values of nitrogen and carbon were used to evaluate which process contributes most to nitrogen reduction and to trace the origin of the nutrient, whether from fertilizers, waste water or sewage. In conclusion, the most important process in the wetland of the Comuna drain is plant uptake; in fact, the largest percentage of nitrogen reduction occurred in the period of vegetative growth. This highlights the importance of studying isotopes in plant tissues and of the water residence time, whose increase would allow a greater reduction of nutrients.
Abstract:
Changepoint analysis is a well-established area of statistical research, but in the context of spatio-temporal point processes it is as yet relatively unexplored. Some substantial differences with respect to standard changepoint analysis have to be taken into account: firstly, at every time point the datum is an irregular pattern of points; secondly, in real situations issues of spatial dependence between points and of temporal dependence within time segments arise. Our motivating example consists of data concerning the monitoring and recovery of radioactive particles from Sandside beach, north of Scotland; there have been two major changes in the equipment used to detect the particles, representing known potential changepoints in the number of retrieved particles. In addition, offshore particle retrieval campaigns are believed to reduce the particle intensity onshore with an unknown temporal lag; in this latter case, the problem concerns multiple unknown changepoints. We therefore propose a Bayesian approach for detecting multiple changepoints in the intensity function of a spatio-temporal point process, allowing for spatial and temporal dependence within segments. We use log-Gaussian Cox processes, a very flexible class of models suitable for environmental applications, which can be implemented using the integrated nested Laplace approximation (INLA), a computationally efficient alternative to Markov chain Monte Carlo methods for approximating the posterior distribution of the parameters. Once the posterior curve is obtained, we propose a few methods for detecting significant changepoints. We present a simulation study, which consists of generating spatio-temporal point pattern series under several scenarios; the performance of the methods is assessed in terms of type I and II errors, detected changepoint locations and accuracy of the segment intensity estimates. We finally apply the above methods to the motivating dataset and obtain good and sensible results about the presence and nature of the changes in the process.
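A heavily simplified 1-D analogue of the intensity-changepoint problem may help fix ideas: Poisson counts per time bin, a single unknown changepoint, Gamma priors on the two rates, and a posterior over the changepoint location obtained from marginal likelihoods. This sketch ignores spatial structure and the log-Gaussian Cox process / INLA machinery used in the thesis; all numbers are hypothetical.

```python
# Toy Bayesian changepoint detection for a Poisson count series
# (Poisson-Gamma marginal likelihoods; single changepoint).
import numpy as np
from scipy.special import gammaln

def log_marginal(counts, a=1.0, b=1.0):
    """log p(counts) with the Poisson rate ~ Gamma(a, b) integrated out."""
    counts = np.asarray(counts)
    n, s = len(counts), counts.sum()
    return (a * np.log(b) - gammaln(a) + gammaln(a + s)
            - (a + s) * np.log(b + n) - np.sum(gammaln(counts + 1)))

def changepoint_posterior(counts):
    counts = np.asarray(counts)
    logp = np.full(len(counts), -np.inf)
    for t in range(1, len(counts)):            # change between bin t-1 and t
        logp[t] = log_marginal(counts[:t]) + log_marginal(counts[t:])
    logp -= logp.max()
    p = np.exp(logp)
    return p / p.sum()

# Hypothetical series whose rate drops after bin 30
rng = np.random.default_rng(2)
y = np.concatenate([rng.poisson(8, 30), rng.poisson(3, 30)])
post = changepoint_posterior(y)
print("most probable changepoint bin:", int(np.argmax(post)))
```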
Abstract:
Tonalite-trondhjemite-granodiorite (TTG) gneisses form up to two-thirds of the preserved Archean continental crust, and there is considerable debate regarding the primary magmatic processes of the generation of these rocks. The popular theories indicate that these rocks were formed by partial melting of basaltic oceanic crust which had previously been metamorphosed to garnet-amphibolite and/or eclogite facies conditions, either at the base of thick oceanic crust or by subduction processes.

This study investigates a new aspect regarding the source rock for Archean continental crust, which is inferred to have had a bulk composition richer in magnesium (picrite) than present-day basaltic oceanic crust. This difference is supposed to originate from a higher geothermal gradient in the early Archean, which may have induced higher degrees of partial melting in the mantle and resulted in a thicker and more magnesian oceanic crust.

The methods used to investigate the role of a more MgO-rich source rock in the formation of TTG-like melts, in the context of this new approach, are mineral equilibria calculations with the software THERMOCALC and high-pressure experiments conducted at 10–20 kbar and 900–1100 °C, both combined in a forward modelling approach. Initially, P–T pseudosections for natural rock compositions with increasing MgO contents were calculated in the system NCFMASHTO (Na2O–CaO–FeO–MgO–Al2O3–SiO2–H2O–TiO2) to ascertain the metamorphic products of rocks with increasing MgO contents, from a MORB up to a komatiite. A small number of previous experiments on komatiites showed the development of pyroxenite instead of eclogite and garnet-amphibolite during metamorphism and established that melts of these pyroxenites are of basaltic composition, thus again building oceanic crust instead of continental crust.

The P–T pseudosections calculated represent a continuous development of their metamorphic products from amphibolites and eclogites towards pyroxenites. On the basis of these calculations and of the changes within the range of compositions, three picritic Models of Archean Oceanic Crust (MAOC) were established with different MgO contents (11, 13 and 15 wt%) ranging between basalt and komatiite. The thermodynamic modelling for MAOC 11, 13 and 15 at supersolidus conditions is imprecise, since no appropriate melt model for metabasic rocks is currently available and the melt model for metapelitic rocks resulted in unsatisfactory calculations. The partially molten region is therefore covered by high-pressure experiments. The results of the experiments show a transition from predominantly tonalitic melts in MAOC 11 to basaltic melts in MAOC 15, and a solidus moving towards higher temperatures with increasing magnesium in the bulk composition. Tonalitic melts were generated in MAOC 11 and 13 at pressures up to 12.5 kbar in the presence of garnet, clinopyroxene, plagioclase ± quartz (± orthopyroxene in the presence of quartz and at lower pressures), in the absence of amphibole, but it could not be explicitly established whether the tonalitic melts coexisting with an eclogitic residue and rutile at 20 kbar belong to the Archean TTG suite. Basaltic melts were generated predominantly in the presence of granulite facies residues such as amphibole ± garnet, plagioclase and orthopyroxene, lacking quartz, in all MAOC compositions at pressures up to 15 kbar.
The tonalitic melts generated in MAOC 11 and 13 indicate that a thicker oceanic crust, richer in magnesium than a modern basalt, is also a viable source for the generation of TTG-like melts, and therefore of continental crust, in the Archean. The experimental results are related to different geologic settings as a function of pressure. The favoured setting for the generation of early TTG-like melts at 15 kbar is the base of an oceanic crust thicker than that existing today, or the melting of slabs in shallow subduction zones, both without interaction of the tonalitic melts with the mantle. Tonalitic melts at 20 kbar may have been generated below the plagioclase stability field by slab melting in deeper subduction zones that developed with time during the progressive cooling of the Earth, but it is unlikely that those melts reached lower pressure levels without further mantle interaction.
Modelling, diagnostics and experimental analysis of plasma assisted processes for material treatment
Abstract:
This work presents results from experimental investigations of several different atmospheric pressure plasma applications, such as Metal Inert Gas (MIG) welding, Plasma Arc Cutting (PAC) and Plasma Arc Welding (PAW) sources, as well as Inductively Coupled Plasma (ICP) torches. The main diagnostic tool used is High Speed Imaging (HSI), often assisted by Schlieren imaging to analyse non-visible phenomena. Furthermore, starting from the thermo-fluid-dynamic models developed by the University of Bologna group, these plasma processes have also been studied with new advanced models, focusing for instance on the interaction between a melting metal wire and a plasma, or considering non-equilibrium phenomena for the diagnostics of plasma arcs. Additionally, the experimental diagnostic tools developed for industrial thermal plasmas have also been used for the characterization of innovative low-temperature atmospheric pressure non-equilibrium plasmas, such as dielectric barrier discharges (DBD) and plasma jets. These sources are driven by voltage pulses of a few kV with rise times of a few nanoseconds, to avoid the formation of a plasma arc, and have interesting applications in the surface functionalization of thermosensitive materials. In order to investigate bio-medical applications of thermal plasmas as well, a self-developed quenching device has been connected to an ICP torch. This device has allowed the inactivation of several kinds of bacteria spread on Petri dishes while keeping the substrate temperature below 40 °C, which is a strict requirement for the treatment of living tissues.
Abstract:
This PhD thesis is focused on cold atmospheric plasma (gas plasma, GP) treatments for microbial inactivation in food applications. In fact, GP represents a promising emerging technology, alternative to the traditional methods for the decontamination of foods. The objectives of this work were to evaluate: - the effects of GP treatments on microbial inactivation in model systems and in real foods; - the stress response in L. monocytogenes following exposure to different GP treatments. As far as the first aspect is concerned, inactivation curves were obtained for some target pathogens, i.e. Listeria monocytogenes and Escherichia coli, by exposing microbial cells to GP generated with two different DBD devices and processing conditions (exposure time, material of the electrodes). Concerning food applications, the effects of different GP treatments on the inactivation of the natural microflora and of Listeria monocytogenes, Salmonella Enteritidis and Escherichia coli on the surface of Fuji apples, soya sprouts and black pepper were evaluated. In particular, the efficacy of exposure to gas plasma was assessed immediately after the treatments and during storage. Moreover, possible changes in quality parameters such as colour, pH, aw, moisture content, oxidation, polyphenol oxidase activity and antioxidant activity were investigated. Since the lack of knowledge of the cellular targets of GP may limit its application, the possible mechanism of action of GP was studied against two strains of Listeria monocytogenes by evaluating modifications in the fatty acids of the cytoplasmic membrane (through GC/MS analysis) and in the metabolites detected by SPME-GC/MS and 1H-NMR analyses. Moreover, the changes induced by different treatments on the expression of selected genes related to the general stress response, to virulence or to metabolism were detected with Reverse Transcription-qPCR. In collaboration with the Scripps Research Institute (La Jolla, CA, USA), proteomic profiles following gas plasma exposure were also analysed through Multidimensional Protein Identification Technology (MudPIT) to evaluate possible changes in metabolic processes.
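Inactivation curves of the kind described above are commonly summarised by fitting a survival model to log survivor counts; as a generic illustration (with hypothetical numbers, not data or models from the thesis), a first-order log-linear fit yields a decimal reduction time D:

```python
# Generic log-linear inactivation fit: log10 N(t) = log10 N0 - t / D.
# Hypothetical data, illustrative only; other models (e.g. Weibull) are
# often needed for plasma inactivation curves.
import numpy as np

t = np.array([0, 2, 4, 6, 8, 10])                 # treatment time, min (assumed)
logN = np.array([7.0, 6.1, 5.3, 4.2, 3.4, 2.5])   # log10 CFU/g (hypothetical)

slope, intercept = np.polyfit(t, logN, 1)
D = -1.0 / slope                                   # time for a 1-log reduction
print(f"log10 N0 = {intercept:.2f}, D-value = {D:.2f} min")
```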
Abstract:
Polymer-based colloids with sizes in the nanometre range are regarded as promising candidates for the encapsulation and transport of pharmaceutical agents. It is therefore important to better understand the physical processes that influence the formation, structure and kinetic stability of polymer-based colloids. However, investigating these processes for nanometre-sized objects is complicated and requires advanced techniques. In this thesis I describe studies in which dual-colour fluorescence cross-correlation spectroscopy (DC FCCS) was used to obtain information on the interaction and exchange of dispersed, nanometre-sized colloids. First, I investigated the preparation of polymer nanoparticles from emulsion droplets, which is one of the most frequently applied nanoparticle formulation processes. I could show that DC FCCS unambiguously and directly measures coalescence between emulsion droplets. This is of interest because coalescence is regarded as the main cause of the broad size distribution of the final nanoparticles. Furthermore, I studied the exchange of micelle-forming molecules between amphiphilic diblock copolymer micelles. A linear-brush block copolymer, which forms micelles with a dense and short corona, served as the model system. With DC FCCS the exchange could be followed in different solvents and at different temperatures. I found that, depending on the solvent quality, the exchange time can be shifted by orders of magnitude, which allows the exchange kinetics to be tuned over a wide range. One property that all these colloids have in common is their polydispersity. In the last part of my thesis I used polymers as a model system to investigate the effect of polydispersity and of the type of fluorescent labelling on FCS experiments. An adaptation of the classical FCS model can describe the FCS correlation curves of these systems. I confirmed the validity of my approach by comparison with gel permeation chromatography and Brownian molecular dynamics simulations.
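For reference, the "classical FCS model" adapted in the last part of the work is, in its standard single-component form for free three-dimensional diffusion (a textbook expression, not a result of this thesis):

```latex
% Standard single-component FCS autocorrelation for free 3-D diffusion
G(\tau) = \frac{1}{\langle N \rangle}
          \left(1 + \frac{\tau}{\tau_D}\right)^{-1}
          \left(1 + \frac{\tau}{S^{2}\,\tau_D}\right)^{-1/2},
\qquad \tau_D = \frac{w_0^{2}}{4D}, \quad S = \frac{z_0}{w_0}
```

Here the amplitude is the inverse of the mean number of molecules in the focal volume, tau_D is the diffusion time and S = z0/w0 is the structure parameter of the confocal detection volume.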
Abstract:
This work is devoted to the investigation of the photophysical processes that occur in blends of electron donors with electron acceptors for application in organic solar cells. The electron donors used are the copolymer PBDTTT-C, which consists of benzodithiophene and thienothiophene units, and the small molecule p-DTS(FBTTh2)2, which contains silicon-bridged dithiophene as well as fluorinated benzothiadiazole and dithiophene. As electron acceptors, a planar 3,4:9,10-perylenetetracarboxylic diimide (PDI) derivative and various fullerene derivatives are employed. PDI derivatives are regarded as promising alternatives to fullerenes because of their structural, optical and electronic properties, which can be tuned by chemical synthesis. The weightiest argument for PDI derivatives is their absorption in the visible range of the solar spectrum, which can improve the photocurrent. However, fullerene-based blends usually surpass the efficiency of donor-PDI blends.

To identify the disadvantage of the PDI-based blends compared with the corresponding fullerene-based blends, the different donor-acceptor combinations are examined with respect to their optical, electronic and structural properties. Time-resolved spectroscopy, above all transient absorption (TA) spectroscopy, is applied to analyse charge generation, and the comparison of the donor-PDI blend films with the donor-fullerene blend films shows that the formation of charge-transfer states constitutes one of the main loss channels.

Furthermore, blends of PBDTTT-C and [6,6]-phenyl-C61-butyric acid methyl ester (PC61BM) are investigated by TA spectroscopy on time scales from ps to µs, and it can be shown that the triplet state of the polymer is populated via non-geminate recombination of free charges on a sub-ns time scale. Advanced data-analysis methods, such as multivariate curve resolution (MCR), are applied to separate overlapping signals. In addition, the regeneration of charge carriers by triplet-triplet annihilation on a ns-µs time scale is demonstrated. Moreover, the influence of the solvent additive 1,8-diiodooctane (DIO) on the performance of p-DTS(FBTTh2)2:PDI solar cells is investigated. The findings from morphological and photophysical experiments are combined to relate the structural properties and the photophysics to the relevant device parameters. Time-resolved photoluminescence (TRPL) measurements show that the use of DIO leads to a weaker reduction of the photoluminescence, which can be attributed to a larger phase separation. Furthermore, TA spectroscopy shows that the use of DIO leads to improved crystallinity of the active layer and promotes the generation of free charges. For a detailed analysis of the signal decay, a model is applied that accounts for the simultaneous decay of bound CT states and free charges; optimised donor-acceptor blends show a larger fraction of non-geminate recombination of free charge carriers.

In a further case study, the influence of the fullerene derivative, namely IC60BA versus PC71BM, on the performance and photophysics of the solar cells is investigated.
A combination of thin-film structural characterisation and time-resolved spectroscopy reveals that blends using ICBA as the electron acceptor show poorer separation of charge-transfer states and suffer from stronger geminate recombination than PCBM-based blends. This can be attributed to the smaller driving force for charge separation and to the higher disorder of the ICBA-based blends, both of which hinder charge separation. In addition, the influence of neat fullerene domains on the functionality of organic solar cells made from blends of the thienothiophene-based polymer pBTTT-C14 and PC61BM is investigated. For this purpose, the photophysics of films with donor-acceptor blend ratios of 1:1 and 1:4 is compared. While 1:1 blends show only a co-crystalline phase, in which fullerenes intercalate between the side chains of pBTTT, the excess of fullerene in the 1:4 samples results in the formation of neat fullerene domains in addition to the co-crystalline phase. Transient absorption spectroscopy demonstrates that charge-transfer states in 1:1 blends decay mainly via geminate recombination, whereas in 1:4 blends a considerable fraction of the charges can overcome their mutual Coulomb attraction and form free charge carriers, which eventually recombine non-geminately.
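The decay model mentioned above, with bound CT states decaying geminately (first order) and free charges recombining non-geminately (second order), can be sketched as a pair of rate equations; the rate constants and initial densities below are placeholders, not values fitted in the thesis.

```python
# Illustrative rate-equation model for the TA signal decay: bound CT states
# decay geminately (first order), free charges recombine non-geminately
# (second order).  All parameter values are assumed, for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

k_ct = 1e9        # 1/s, geminate decay rate of CT states (assumed)
gamma = 1e-12     # cm^3/s, non-geminate recombination coefficient (assumed)

def rhs(t, y):
    n_ct, n_free = y
    return [-k_ct * n_ct, -gamma * n_free**2]

y0 = [5e16, 5e16]                       # cm^-3, initial densities (assumed)
t = np.logspace(-12, -6, 200)           # 1 ps to 1 us
sol = solve_ivp(rhs, (t[0], t[-1]), y0, t_eval=t, method="LSODA", rtol=1e-8)

total = sol.y[0] + sol.y[1]             # crude proxy for the charge-induced TA signal
print("signal remaining at ~1 ns and 1 us:",
      total[np.searchsorted(t, 1e-9)] / total[0], total[-1] / total[0])
```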
Abstract:
The Bedouin of South Sinai have been significantly affected by the politics of external powers for a long time. However, never had the interest of external powers in Sinai been so strong as since the Israeli-Egyptian wars in the second half of the 20th century, when Bedouin interests started to collide with Egypt's plans for the development of luxury tourism in South Sinai.

The tourism boom that started in the 1980s has brought economic and infrastructure development to the Bedouin, and tourism has become their most important source of income. However, while the absolute increase of tourists to Sinai has trickled down to the Bedouin to some extent, the participation of the Bedouin in the overall tourism development is under-proportionate. Moreover, the Bedouin have become increasingly dependent on monetary income, and consequently on tourism as the only significant source of income, while at the same time they have lost much of their land as well as their self-determination.

In this context, Bedouin livelihoods have become very vulnerable to repeated depressions in the tourism industry as well as to marginalization. Major marginalization processes the Bedouin are facing are the loss of land, barriers to market entry (especially increasingly strict rules and regulations in the tourism industry), as well as discrimination by the authorities. Social differentiation and Bedouin preferences are identified as further factors in Bedouin marginalization.

The strategies the Bedouin have developed in response to all these problems are coping strategies, which try to deal with the present problem at the individual level. Basically no strategies have been developed at the collective level that would aim to actively shape the Bedouin's present and future. Collective action has been hampered by a variety of factors, such as the speed of the developments, the distribution of power or the decay of tribal structures.

While some Bedouin might be able to continue their tourism activities, a large number of informal jobs will no longer be feasible. The majority of the previously mostly self-employed Bedouin will probably be forced to work as day-laborers who will have lost much of their pride, dignity, sovereignty and freedom. Moreover, with a return to subsistence being impossible for the majority of the Bedouin, it is likely that an increasing number of marginalized Bedouin will turn to illegal income-generating activities such as smuggling or drug cultivation. This in turn will lead to further repression and discrimination and could escalate into a serious violent conflict between the Bedouin and the government.

Development plans and projects should address the general lack of civil rights, local participation and protection of minorities in Egypt and promote Bedouin community development and the consideration of Bedouin interests in tourism development.

Whether the political upheavals and the resignation of President Mubarak at the beginning of 2011 will have a positive effect on the situation of the Bedouin remains to be seen.