836 results for Data fusion applications
Resumo:
This thesis focused on the investigation of the linear optical properties of novel two-photon absorbers for biomedical applications. Substituted imidazole and imidazopyridine derivatives and organic dendrimers were studied as potential fluorophores for two-photon bioimaging. The sulphonamido imidazole derivatives showed superior luminescence properties compared with the other substituted imidazoles. The luminescence properties of imidazo[1,2-a]pyridines depended strongly on the substitution pattern: substitution at the imidazole ring led to a higher fluorescence yield than substitution at the pyridine ring. Bis-imidazo[1,2-a]pyridines of the Donor-Acceptor-Donor type were then examined; those dimerized at the C3 position had better luminescence properties than those dimerized at C5, displaying high emission yields and substantial 2PA cross-sections. Phosphazene-based dendrimers with fluorene branches and cationic charges on the periphery were also examined. Owing to aggregation in polar solvents, the dendrimers showed a significant loss of luminescence with respect to the model fluorene chromophore. An improved design based on more rigid chromophores yields enhanced luminescence properties which, combined with large 2PA cross-sections, make these compounds valuable as fluorophores in bioimaging. A photophysical study of several ketocoumarin initiators, designed for the fabrication of small prostheses by two-photon polymerization (2PP), was also carried out. The compounds showed low emission yields, indicative of a high population of the triplet excited state, which is the state active in producing the reactive species. Their efficiency in 2PP was proved by the fabrication of microstructures, and their biocompatibility was tested in the collaborator's laboratory. In the context of the 2PA photorelease of drugs, three fluorene-based dyads were investigated.
They were designed to release gamma-aminobutyric acid via two-photon-induced electron transfer. The experimental data in polar solvents showed a fast electron transfer followed by an almost equally fast back electron transfer, which indicates that the system is poorly optimized.
Resumo:
Nanotechnologies are expanding rapidly because of the opportunities the new materials offer in many areas, such as the manufacturing industry; food production, processing and preservation; and the pharmaceutical and cosmetic industries. The size distribution of nanoparticles determines their properties and is a fundamental parameter that needs to be monitored from small-scale synthesis up to bulk production and quality control of nanotech products on the market. As a consequence of the increasing number of applications of nanomaterials, EU regulatory authorities are introducing an obligation for companies that use nanomaterials to acquire analytical platforms for assessing the size parameters of those nanomaterials. In this work, Asymmetrical Flow Field-Flow Fractionation (AF4) and Hollow Fiber Flow Field-Flow Fractionation (HF5), hyphenated with Multiangle Light Scattering (MALS), are presented as tools for a deep functional characterization of nanoparticles. In particular, the applicability of AF4-MALS to the characterization of liposomes in a wide series of media is demonstrated. The technique is then used to explore the functional features of a liposomal drug vector in terms of its biological and physical interactions with blood serum components: a comprehensive approach to understanding the behavior of lipid vesicles in terms of drug release and fusion/interaction with other biological species is described, together with the strengths and weaknesses of the method. Next, the size characterization, size stability, and conjugation of azidothymidine drug molecules with a new generation of metastable drug vectors, the Metal Organic Frameworks, are discussed.
Lastly, the applicability of HF5-ICP-MS to the rapid screening of samples posing a potential nanorisk is shown: rather than a deep and comprehensive characterization, a quick and practical methodology is presented that, within a few steps, provides qualitative information on the content of metallic nanoparticles in tattoo ink samples.
Resumo:
The recent advent of next-generation sequencing technologies has revolutionized the way the genome is analyzed. This innovation yields deeper information at lower cost and in less time, and provides data that are discrete measurements. One of the most important applications of these data is differential analysis, that is, investigating whether a gene exhibits a different expression level under two (or more) biological conditions (such as disease states, treatments received, and so on). Statistically, the final aim is hypothesis testing, and for modeling these data the Negative Binomial distribution is considered the most adequate choice, especially because it allows for overdispersion. However, the estimation of the dispersion parameter is a very delicate issue because little information is usually available for estimating it. Many strategies have been proposed, but they often result in procedures based on plug-in estimates, and in this thesis we show that this discrepancy between the estimation and the testing framework can lead to an uncontrolled type I error. We propose a mixture model that allows each gene to share information with other genes that exhibit similar variability. Three consistent statistical tests are then developed for differential expression analysis. We show that the proposed method improves the sensitivity of detecting differentially expressed genes with respect to common procedures: it comes closest to the nominal type I error rate while keeping high power. The method is finally illustrated on prostate cancer RNA-seq data.
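As a rough illustration of the kind of testing problem involved (not the mixture model proposed in the thesis), the following minimal sketch performs a likelihood-ratio test for differential expression under a Negative Binomial model with the dispersion treated as fixed and known; all function names and parameter values are illustrative assumptions.

```python
import math

def nb_loglik(counts, mean, disp):
    # Negative Binomial log-likelihood, mean/dispersion parameterization:
    # Var(Y) = mean + disp * mean^2, with size parameter r = 1/disp.
    r = 1.0 / disp
    ll = 0.0
    for y in counts:
        ll += (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
               + r * math.log(r / (r + mean))
               + y * math.log(mean / (r + mean)))
    return ll

def lrt_de(counts_a, counts_b, disp):
    # Likelihood-ratio test of equal means in two conditions; with the
    # dispersion held fixed, the NB mean MLE is the sample mean.
    mean_a = sum(counts_a) / len(counts_a)
    mean_b = sum(counts_b) / len(counts_b)
    pooled = (sum(counts_a) + sum(counts_b)) / (len(counts_a) + len(counts_b))
    stat = 2.0 * (nb_loglik(counts_a, mean_a, disp)
                  + nb_loglik(counts_b, mean_b, disp)
                  - nb_loglik(counts_a + counts_b, pooled, disp))
    # Asymptotic null distribution: chi-squared with 1 degree of freedom.
    p_value = math.erfc(math.sqrt(max(stat, 0.0) / 2.0))
    return stat, p_value
```

Plugging a point estimate of `disp` into such a test, and then testing as if the dispersion were known, is precisely the plug-in practice whose effect on the type I error is examined in the thesis.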
Resumo:
The present dissertation explores, theoretically and experimentally, the problems and the potential advantages of different types of power converters for "Smart Grid" applications, with particular emphasis on multilevel architectures, which are attracting rising interest, including from industry. Models of the main multilevel architectures (Diode-Clamped and Cascaded) are presented, and the modulation strategies best suited for operation as a grid interface are identified; in particular, the close correlation between the PWM (Pulse Width Modulation) and SVM (Space Vector Modulation) approaches is highlighted. An innovative multilevel topology called the MMC (Modular Multilevel Converter) is investigated, and its single-phase, three-phase and back-to-back configurations are analyzed. Control techniques that appropriately manage the charge level of the numerous capacitors and handle the power flow in a flexible way are defined and experimentally validated. Another converter attracting interest in the "Power Conditioning Systems" field is the "Matrix Converter". In this architecture, too, the output voltage is multilevel. It offers a high-quality input current and a bidirectional power flow, and it can control the input power factor (i.e. it can participate in active and reactive power regulation). The implemented control system, which allows fast data acquisition for diagnostic purposes, is described and experimentally verified.
Resumo:
The use of linear programming in various areas has increased with the significant improvement of specialized solvers. Linear programs are used directly to model practical problems, or as subroutines in algorithms such as formal proofs or branch-and-cut frameworks. In many situations a certified answer is needed, for example a guarantee that the linear program is feasible or infeasible, or a provably safe bound on its objective value. Most of the available solvers work with floating-point arithmetic and are thus subject to its shortcomings, such as rounding errors or underflow, and can therefore deliver incorrect answers. While adequate for some applications, this is unacceptable for critical applications like flight control or nuclear plant management, due to the potentially catastrophic consequences. We propose a method that gives a certified answer as to whether a linear program is feasible or infeasible, or returns "unknown". The advantage of our method is that it is reasonably fast and rarely answers "unknown". It works by computing a safe solution that is in some sense the best possible in the relative interior of the feasible set. To certify the relative interior, we employ exact arithmetic, whose use is nevertheless limited in general to critical places, allowing us to remain computationally efficient. Moreover, when certain conditions are fulfilled, our method is able to deliver a provable bound on the objective value of the linear program. We test our algorithm on typical benchmark sets and obtain higher rates of success compared to previous approaches for this problem, while keeping the running times acceptably small. The computed objective value bounds are in most cases very close to the known exact objective values. We demonstrate the usability of the method by additionally employing a variant of it in a different scenario, namely to improve the results of a Satisfiability Modulo Theories solver.
Our method is used as a black box in the nodes of a branch-and-bound tree to implement conflict learning based on the certificate of infeasibility for linear programs consisting of subsets of linear constraints. The generated conflict clauses are in general small and offer good prospects for reducing the search space. Compared to other methods, we obtain significant improvements in running time, especially on large instances.
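The central idea of checking candidate answers in exact arithmetic, so that floating-point rounding cannot invalidate the verdict, can be sketched as follows. This is a simplified stand-in for the method described above, not its actual implementation; the function names and the use of Python's `fractions` module are illustrative assumptions.

```python
from fractions import Fraction

def certify_feasible(A, b, x):
    # Verify in exact rational arithmetic that x satisfies A x <= b.
    # Fraction(float) is an exact conversion, so no rounding occurs here.
    xq = [Fraction(v) for v in x]
    for row, bi in zip(A, b):
        lhs = sum(Fraction(a) * xi for a, xi in zip(row, xq))
        if lhs > Fraction(bi):
            return False
    return True

def certify_infeasible(A, b, y):
    # Farkas certificate: y >= 0 with y^T A = 0 and y^T b < 0
    # proves that the system A x <= b has no solution.
    yq = [Fraction(v) for v in y]
    if any(v < 0 for v in yq):
        return False
    m, n = len(A), len(A[0])
    for j in range(n):
        if sum(yq[i] * Fraction(A[i][j]) for i in range(m)) != 0:
            return False
    return sum(yq[i] * Fraction(b[i]) for i in range(m)) < 0
```

The candidate point (or Farkas multiplier vector) can come from a fast floating-point solver; only the cheap final verification is done exactly, which reflects the efficiency argument made above.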
Resumo:
This dissertation demonstrates and improves the predictive power of coupled-cluster theory with regard to the highly accurate calculation of molecular properties. The demonstration is carried out using extrapolation and additivity techniques within single-reference coupled-cluster theory, by means of which the existence and structure of previously unknown molecules containing heavy main-group elements are predicted. The example of cyclic SiS_2, a triatomic molecule with 16 valence electrons, makes it particularly clear that the predictive power of the theory is nowadays on a par with experiment: theoretical considerations initiated an experimental search for this molecule, which finally led to its detection and characterization by rotational spectroscopy. The predictive power of coupled-cluster theory is improved by developing a multireference coupled-cluster method for the calculation of first-order spin-orbit splittings in 2^Pi states. The focus here is on Mukherjee's variant of multireference coupled-cluster theory, but in principle the proposed computational scheme is applicable to all variants. The target accuracy is 10 cm^-1. It is reached with the new method when one- and two-electron effects and, for heavy elements, scalar-relativistic effects are taken into account. In combination with coupled-cluster-based extrapolation and additivity schemes, the method is therefore suitable for computing highly accurate thermochemical data.
Resumo:
The present work deals with the synthesis and characterization of polymers with redox-functional phenothiazine side chains. Phenothiazine and its derivatives are small redox units whose reversible redox behavior is associated with electrochromic properties. A special feature of phenothiazines is the formation of stable radical cations in the oxidized state. Phenothiazines can therefore act as bistable molecules and switch between two stable redox states, a switching process that is accompanied by a color change.

This work describes the synthesis of novel phenothiazine polymers by radical polymerization. Phenothiazine derivatives were covalently attached to aliphatic and aromatic polymer chains via two different synthetic routes. The first route uses vinyl monomers with phenothiazine functionality for direct polymerization. The second route uses amine-modified phenothiazine derivatives to functionalize polymers bearing active-ester side chains in a polymer-analogous reaction.

Because of their electron-donor properties, polymers with redox-functional phenothiazine side chains are suitable candidates for use as cathode materials. To assess their suitability, phenothiazine polymers were employed as electrode materials in lithium battery cells. The polymers showed good capacities of about 50-90 Ah/kg as well as fast charging times in the battery cell; in particular, the charging rates are 5-10 times higher than those of conventional lithium batteries. With regard to the number of charge and discharge cycles, the polymers achieved good results in long-term stability tests, surviving 500 charging cycles with only small changes in the initial charging times and capacities. The long-term stability is directly related to the stability of the radical. The radical cations were stabilized by lengthening the side chain at the nitrogen atom of the phenothiazine and at the polymer backbone. Such alkyl substitution increases the radical stability through stronger interaction with the aromatic ring and thus improves the battery performance with respect to charge/discharge cycling stability.

Furthermore, the practical use of bistable phenothiazine polymers as a storage medium for high data densities was investigated. Thin films of the polymer on conductive substrates were oxidized electrochemically by means of atomic force microscopy with conductive probes. This technique made it possible to oxidize the polymer surface on the nanoscale and thereby change the local conductivity, so that patterns of different sizes could be written lithographically and detected via the change in their conductivity. The writing process changed only the local conductivity without affecting the topography of the polymer film. Moreover, the patterns proved particularly stable, both mechanically and over time.

Finally, new synthetic strategies were developed to produce surfaces that are both mechanically stable and redox-functional. Grafted polymer brushes with redox-functional phenothiazine side chains were prepared by surface-initiated atom transfer radical polymerization and analyzed by X-ray methods and atomic force microscopy. One of the synthetic strategies starts from grafted active-ester brushes, which can subsequently be modified with redox-functional groups. This approach is particularly promising and makes it possible to anchor different functional groups to the active-ester brushes. By using cross-linking groups, the mechanical stability of such polymer films can thus be optimized alongside their redox properties.
Resumo:
One of the problems most undervalued by smartphone users is the security of the data on their mobile devices. Today smartphones and tablets are used to send messages and photos and, above all, to stay connected with social networks, forums and other platforms. These devices contain a lot of private information, such as passwords, phone numbers, private photos and emails, and an attacker may choose to steal or destroy this information. The main topic of this thesis is the security of the applications present on the most popular stores (the App Store for iOS and the Play Store for Android) and of their mechanisms for managing security. The analysis focuses on how the architecture of the two systems protects users from threats, and highlights the actual presence of malware and spyware in the respective application stores. The work described in the subsequent chapters covers a study of the behavior of 50 Android applications and 50 iOS applications, performed using network analysis software. Furthermore, this thesis presents statistics about the malware and spyware present on the respective stores and the permissions they require. In the end the reader will be able to understand how to recognize malicious applications and which of the two systems is more suitable for them. The thesis is structured as follows. The first chapter introduces the security mechanisms of the Android and iOS platform architectures and of their respective application stores. The second chapter explains the work done: what tools we chose to complete our analysis, why, and how. The third chapter discusses the execution of the tests, the protocol followed and the approach used to assess the "level of danger" of each application checked. The fourth chapter presents the results of the tests and some statistics on the presence of malicious applications on the Play Store and the App Store.
The fifth chapter is devoted to the study of users: what they think about malicious applications and how they might avoid them. The sixth chapter seeks to establish, following our methodology, which application store is safer. Finally, the seventh chapter concludes the thesis.
Resumo:
Data deduplication describes a class of approaches that reduce the storage capacity needed to store data or the amount of data that has to be transferred over a network. These approaches detect coarse-grained redundancies within a data set, e.g. a file system, and remove them.

One of the most important applications of data deduplication is backup storage systems, where these approaches are able to reduce the storage requirements to a small fraction of the logical backup data size. This thesis introduces multiple new extensions of so-called fingerprinting-based data deduplication. It starts with the presentation of a novel system design that allows a cluster of servers to perform exact data deduplication with small chunks in a scalable way.

Next, a combination of compression approaches for an important but often overlooked data structure in data deduplication systems, the so-called block and file recipes, is introduced. Using these compression approaches, which exploit unique properties of data deduplication systems, the size of these recipes can be reduced by more than 92% in all investigated data sets. As file recipes can occupy a significant fraction of the overall storage capacity of data deduplication systems, this compression enables significant savings.

A technique to increase the write throughput of data deduplication systems, based on the aforementioned block and file recipes, is introduced next. The novel Block Locality Caching (BLC) uses properties of block and file recipes to overcome the chunk-lookup disk bottleneck of data deduplication systems, which limits either their scalability or their throughput. The presented BLC overcomes the disk bottleneck more efficiently than existing approaches. Furthermore, it is shown to be less prone to aging effects.

Finally, it is investigated whether large HPC storage systems exhibit redundancies that can be found by fingerprinting-based data deduplication. Over 3 PB of HPC storage data from different data sets have been analyzed. In most data sets, between 20% and 30% of the data can be classified as redundant. According to these results, future work should further investigate how data deduplication can be integrated into HPC storage systems.

This thesis presents important novel work in different areas of data deduplication research.
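The basic fingerprinting mechanism underlying all of these systems (though not the cluster design, recipe compression, or BLC contributions themselves) can be sketched in a few lines. The class name, the fixed chunk size, and the purely in-memory index are simplifying assumptions; real systems use content-defined chunking and keep the chunk index on disk, which is exactly what creates the lookup bottleneck discussed above.

```python
import hashlib

class DedupStore:
    def __init__(self, chunk_size=8):
        self.chunk_size = chunk_size
        self.chunks = {}   # fingerprint -> chunk data (stored once)
        self.recipes = {}  # file name -> ordered list of fingerprints

    def put(self, name, data):
        # Split the stream into chunks, fingerprint each chunk, and
        # store only chunks whose fingerprint has not been seen before.
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            fp = hashlib.sha1(chunk).hexdigest()
            if fp not in self.chunks:
                self.chunks[fp] = chunk
            recipe.append(fp)
        self.recipes[name] = recipe

    def get(self, name):
        # The file recipe is all that is needed to reassemble the file.
        return b"".join(self.chunks[fp] for fp in self.recipes[name])
```

Two backups sharing most of their content then reference mostly the same stored chunks, which is why backup workloads deduplicate so well.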
Resumo:
Data sets describing the state of the Earth's atmosphere are of great importance in the atmospheric sciences. Over the last decades, the quality and sheer amount of the available data have increased significantly, resulting in a rising demand for new tools capable of handling and analyzing these large, multidimensional sets of atmospheric data. The interdisciplinary work presented in this thesis covers the development and application of practical software tools and efficient algorithms from the field of computer science, with the goal of enabling atmospheric scientists to analyze and gain new insights from these large data sets. For this purpose, our tools combine novel techniques with well-established methods from areas such as scientific visualization and data segmentation. Three practical tools are presented in this thesis. Two of them are software systems (Insight and IWAL) for different types of processing and interactive visualization of data; the third is an efficient algorithm for data segmentation, implemented as part of Insight. Insight is a toolkit for the interactive, three-dimensional visualization and processing of large sets of atmospheric data, originally developed as a testing environment for the novel segmentation algorithm. It provides a dynamic system for combining data from different sources at runtime, a variety of data processing algorithms, and several visualization techniques. Its modular architecture and flexible scripting support led to additional applications of the software, of which two examples are presented: the usage of Insight as a WMS (web map service) server, and the automatic production of image sequences for the visualization of cyclone simulations.
The core application of Insight is the provision of the novel segmentation algorithm for the efficient detection and tracking of 3D features in large sets of atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. Data segmentation usually leads to a significant reduction in the size of the considered data. This enables a practical visualization of the data, statistical analyses of the features and their events, and the manual or automatic detection of interesting situations for subsequent detailed investigation. The concepts of the novel algorithm, its technical realization, and several extensions for avoiding under- and over-segmentation are discussed. As example applications, this thesis covers the setup and results of segmenting upper-tropospheric jet streams and cyclones as full 3D objects. Finally, IWAL is presented, a web application providing easy interactive access to meteorological data visualizations, aimed primarily at students. As a web application, it avoids the need to retrieve all input data sets and to install and operate complex visualization tools on a local machine. The main challenge in providing customizable visualizations to large numbers of simultaneous users was to find an acceptable trade-off between the available visualization options and the performance of the application. Besides the implementation details, benchmarks and the results of a user survey are presented.
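At its core, feature segmentation of this kind amounts to thresholding plus connected-component labeling. The following 2-D sketch is an illustrative reduction, not the novel algorithm itself: the thesis works on full 3-D fields and additionally tracks genesis, lysis, merging and splitting over time, and the function name and threshold convention here are assumptions.

```python
from collections import deque

def label_features(grid, threshold):
    # Segment a 2-D field: cells above `threshold` belong to features,
    # and 4-connected above-threshold cells are merged into one region.
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] > threshold and labels[r][c] == 0:
                count += 1                      # start a new feature
                labels[r][c] = count
                queue = deque([(r, c)])
                while queue:                    # flood-fill the region
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] > threshold
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count
```

Tracking features over time, and detecting merge/split events, then reduces to comparing the labeled regions of consecutive time steps for overlap.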
Resumo:
This work focuses on the study of saltwater intrusion in coastal aquifers, and in particular on the development of conceptual schemes to evaluate the associated risk. Saltwater intrusion depends on different natural and anthropic factors, both exhibiting strongly random behaviour, which should be considered for an optimal management of the territory and its water resources. Given the uncertainty in the problem parameters, the risk associated with salinization needs to be cast in a probabilistic framework. On the basis of a widely adopted sharp-interface formulation, key hydrogeological parameters are modeled as random variables, and global sensitivity analysis is used to determine their influence on the position of the saltwater interface. The analyses presented in this work rely on an efficient model reduction technique based on Polynomial Chaos Expansion, which provides an accurate description of the model without a great computational burden. When the assumptions of classical analytical models are not met, as often happens in applications to real case studies, including the area analyzed in the present work, one can adopt data-driven techniques based on the analysis of the data characterizing the system under study. A model can then be defined on the basis of connections between the system state variables, with only a limited number of assumptions about the "physical" behaviour of the system.
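As a concrete example of a sharp-interface relation cast in a probabilistic setting, the classical Ghyben-Herzberg approximation places the freshwater/saltwater interface at a depth of roughly 40 times the freshwater head above sea level. Propagating a random head through it, as sketched below, is a deliberately crude stand-in for the global sensitivity analysis and Polynomial Chaos Expansion used in the thesis; the densities and the head distribution are assumed values.

```python
import random
import statistics

def interface_depth(head, rho_f=1000.0, rho_s=1025.0):
    # Ghyben-Herzberg sharp-interface estimate: depth (below sea level)
    # of the freshwater/saltwater interface for a freshwater head `head`
    # above sea level, given fresh/salt water densities in kg/m^3.
    # With the default densities the factor is 1000/25 = 40.
    return rho_f / (rho_s - rho_f) * head

def monte_carlo_depth(n=10000, seed=0):
    # Treat the head as a random variable (uniform on [0.5, 1.5] m here)
    # and propagate the uncertainty through the sharp-interface model.
    rng = random.Random(seed)
    depths = [interface_depth(rng.uniform(0.5, 1.5)) for _ in range(n)]
    return statistics.mean(depths), statistics.stdev(depths)
```

The 40:1 amplification of the head uncertainty into interface-depth uncertainty is precisely why a probabilistic treatment of the input parameters matters for salinization risk.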
Resumo:
The field of wireless communication applications is in a permanent state of development (mobile communications standards: GSM/UMTS/LTE/5G; global navigation satellite systems (GNSS): GPS, GLONASS, Galileo, Beidou) toward ever higher data rates and increasing miniaturization, resulting in a high demand for new, optimized radio-frequency materials. In recent years this development has been particularly visible in the growing development and number of smartphones, which combine several technologies with different operating frequencies within one device (data: 1G-4G, GPS, WLAN, Bluetooth). The performance increase required for future technologies (e.g. 5G) can be realized with antenna systems based on MIMO (multiple-input, multiple-output; the controlled combination of several antennas), for which technologies based on dielectric loading are regarded as one of the most promising implementation solutions.

The aim of this work was the development of a suitable paraelectric glass ceramic (epsilon_r > 20, Qf > 5000 GHz, |tau_f| < 20 ppm/K in the GHz frequency range) in the La2O3-TiO2-SiO2-B2O3 system for mobile communication technologies based on dielectric loading, as an alternative to existing, commercially used sintered ceramics. The focus was on the question of how the macroscopic dielectric properties of the glass ceramic correlate with its microstructure and how they can be modified. It was shown that the investigated system fulfills the dielectric material requirements and that glass-ceramic-based dielectrics possess further advantageous non-electronic properties compared with sintered ceramics, so that dielectric glass ceramics can indeed be regarded as a suitable alternative.

A stable green glass with a minimal glass-former content was developed, and its chemical composition was optimized with respect to devitrification and redox instabilities. Suitable dopants for TiO2-containing glass ceramics with low dielectric loss were identified. The influence of the melting conditions on nucleation was investigated, and the ceramization process was optimized for a maximum fraction of the desired crystal phases in order to obtain optimal dielectric properties. The microscopic structure of the glass ceramics was analyzed and its influence on the macroscopic dielectric properties determined. The radio-frequency loss mechanisms were investigated, and series of antenna prototypes were analyzed in order to demonstrate the suitability of glass-ceramic-based dielectrics for use in dielectric-loading applications.
Resumo:
In the current work, three studies of non-aqueous dispersions of particles were carried out using the amphiphilic block copolymer poly(isoprene)-block-poly(methyl methacrylate) (PI-b-PMMA) as stabilizer:
1. dispersions of polyurethane and polyurea porous particles for polymer composites;
2. dispersions of PMMA and PU particles with PDI dye for the study of single molecule spectroscopy detection;
3. dispersions of graphene nanosheets for polymer composites.

In the first study, highly porous polyurethane and polyurea particles were prepared in a non-aqueous emulsion. The preparation of the porous particles consisted of two parts. First, a system was developed in which the emulsion had high stability for the polymerization among diisocyanate, diol and water. In the second part, porous particles were prepared by two methods, fission/fusion and combination, by which highly porous particles were obtained. Applications of the porous particles were also investigated: polyurethane particles were tested as a filling material for polymer composites and as a catalyst carrier for polyethylene polymerization.

In the second study, PMMA and PU particles from a non-aqueous emulsion were investigated via single molecule fluorescence detection. The particles were first loaded with PDI dye, which was detected by fluorescence microscopy. The distribution and orientation of the PDI molecules in the particles were successfully observed by single molecule fluorescence detection. The molecules were homogeneously distributed inside the particles. In addition, they had random orientations, meaning that no aggregates of dye molecules were formed. From these results it can be supposed that the polymer chains were also homogeneously distributed in the particles, and that their conformation was relatively flexible.

In the third study, graphene nanosheets with high surface area were dispersed in THF, an organic solvent with a low boiling point and low toxicity, stabilized with the block copolymer PI-b-PMMA. The dispersion was used to prepare polymer composites. It was shown that the modified graphene nanosheets had good compatibility with the PS and PMMA matrices.
Resumo:
Satellite image classification involves designing and developing efficient image classifiers. With satellite image data and image analysis methods multiplying rapidly, selecting the right mix of data sources and data analysis approaches has become critical to the generation of quality land-use maps. In this study, a new postprocessing information fusion algorithm for the extraction and representation of land-use information based on high-resolution satellite imagery is presented. This approach can produce land-use maps with sharp interregional boundaries and homogeneous regions. The proposed approach consists of five steps. First, a GIS layer (ATKIS data) was used to generate two coarse homogeneous regions, i.e. urban and rural areas. Second, a thematic (class) map was generated using a hybrid spectral classifier combining the Gaussian Maximum Likelihood (GML) algorithm and the ISODATA classifier. Third, a probabilistic relaxation algorithm was applied to the thematic map, resulting in a smoothed thematic map. Fourth, edge detection and edge thinning techniques were used to generate a contour map with pixel-width interclass boundaries. Fifth, the contour map was superimposed on the thematic map by a region-growing algorithm, with the contour map and the smoothed thematic map as two constraints. For the operation of the proposed method, a software package was developed in the C programming language. This package comprises the GML algorithm, a probabilistic relaxation algorithm, the TBL edge detector, an edge thresholding algorithm, a fast parallel thinning algorithm, and a region-growing information fusion algorithm. The county of Landau in the state of Rheinland-Pfalz, Germany, was selected as the test site, with high-resolution IRS-1C imagery as the principal input data.
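The hybrid spectral classifier in the second step rests on the Gaussian Maximum Likelihood decision rule: assign each pixel to the class whose Gaussian density gives it the highest log-likelihood. A one-band sketch follows; the class statistics are invented for illustration, and a real GML classifier uses multivariate class means and covariance matrices estimated from training samples.

```python
import math

def gml_classify(pixel, classes):
    # `classes` maps a class name to (mean, variance) estimated from
    # training samples. Assign the pixel to the class maximizing the
    # Gaussian log-likelihood of its spectral value.
    best_name, best_ll = None, -math.inf
    for name, (mu, var) in classes.items():
        ll = -0.5 * (math.log(2.0 * math.pi * var) + (pixel - mu) ** 2 / var)
        if ll > best_ll:
            best_name, best_ll = name, ll
    return best_name
```

Because the log-likelihood includes the variance term, a class with a broad distribution can claim pixels that lie closer (in raw distance) to a narrow class's mean, which is what distinguishes GML from a minimum-distance classifier.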
Resumo:
In recent years, technologies related to photovoltaic energy have developed rapidly and interest in renewable energy sources has increased substantially. In particular, cost reductions and appropriate feed-in tariffs have contributed to the increase in photovoltaic installations, especially in Germany and Italy. However, for several technologies the observed experimental efficiency of solar cells is still far from the theoretical maximum, so there is still room for improvement. In this framework, the research and development of new materials and new solar devices is mandatory. In this thesis the morphological and optical properties of thin films of nanocrystalline silicon oxynitride (nc-SiON) have been investigated. This material has been studied in view of its application in Si-based heterojunction (HIT) solar cells, in which a-Si:H is currently used as the emitter layer. Amorphous SiO_xN_y has already shown excellent properties, such as electrical conductivity, optical energy gap and transmittance higher than those of a-Si:H. Nc-SiO_xN_y has never been investigated until now, but its properties may surpass those of amorphous SiON. The nc-SiON films were deposited at the University of Konstanz (Germany), and their properties were studied using atomic force microscopy and optical spectroscopy. The material is highly complex, as it is made of different coexisting phases. The main purpose of this thesis is the development of methods for analyzing the morphological and optical properties of nc-SiON, and the study of the reliability of those methods for measuring the characteristics of these silicon films. The collected data will be used to understand the evolution of the properties of nc-SiON as a function of the deposition parameters. The results obtained show that nc-SiON films have better properties than both a-Si:H and a-SiON, i.e. a higher optical band gap and transmittance.
In addition, the analysis of the variation of the observed properties as a function of the deposition parameters allows the deposition conditions to be optimized for obtaining optimal efficiency of a HIT cell with a SiON layer.